Advanced Intelligent Systems (Jul 2024)

Fully Binarized Graph Convolutional Network Accelerator Based on In‐Memory Computing with Resistive Random‐Access Memory

  • Woyu Zhang,
  • Zhi Li,
  • Xinyuan Zhang,
  • Fei Wang,
  • Shaocong Wang,
  • Ning Lin,
  • Yi Li,
  • Jun Wang,
  • Jinshan Yue,
  • Chunmeng Dou,
  • Xiaoxin Xu,
  • Zhongrui Wang,
  • Dashan Shang

DOI
https://doi.org/10.1002/aisy.202300784
Journal volume & issue
Vol. 6, no. 7

Abstract


Artificial intelligence for graph‐structured data has achieved remarkable success in applications such as recommendation systems, social networks, drug discovery, and circuit annotation. Graph convolutional networks (GCNs) are an effective way to learn representations of various graphs. The increasing size and complexity of graphs call for in‐memory computing (IMC) accelerators for GCNs to alleviate massive data transmission between off‐chip memory and processing units. However, implementing GCNs with IMC is challenging because of the large memory consumption, irregular memory access, and device nonidealities. Herein, a fully binarized GCN (BGCN) accelerator based on computational resistive random‐access memory (RRAM) through software–hardware codesign is presented. The essential operations in GCNs, including aggregation and combination, are implemented on RRAM crossbar arrays through cooperation between multiply‐and‐accumulation and content‐addressable memory operations. By leveraging model quantization and IMC on the RRAM, the BGCN accelerator demonstrates lower RRAM usage, high robustness to device variations, high energy efficiency, and classification accuracy comparable to current state‐of‐the‐art GCN accelerators on both graph classification tasks using the MUTAG and PTC datasets and node classification tasks using the Cora and CiteSeer datasets. These results provide a promising approach for edge intelligent systems to efficiently process graph‐structured data.
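The aggregation and combination steps described in the abstract can be sketched in software. The following is a hypothetical minimal model of one fully binarized GCN layer, not the authors' implementation: features and weights are mapped to {-1, +1} (mirroring 1-bit RRAM cell states), combination and aggregation are plain matrix products (standing in for crossbar MAC operations), and a sign activation binarizes the output for the next layer. All function names and the toy graph are illustrative assumptions.

```python
import numpy as np

def binarize(x):
    # Map values to {-1, +1}, mirroring 1-bit device states.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bgcn_layer(adj, h, w):
    """One binarized GCN layer (illustrative sketch, not the paper's design):
    combination (h @ w) then aggregation (adj @ ...), then a sign activation.
    On RRAM hardware each matrix product would be a crossbar MAC operation."""
    h_b = binarize(h)            # binary node features
    w_b = binarize(w)            # binary weights (conductance states)
    combined = h_b @ w_b         # combination: per-node feature transform
    aggregated = adj @ combined  # aggregation over graph neighbors
    return binarize(aggregated)  # 1-bit activations for the next layer

# Toy 3-node graph with self-loops in the adjacency matrix.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=np.int8)
rng = np.random.default_rng(0)
h = rng.standard_normal((3, 4))  # 3 nodes, 4 input features
w = rng.standard_normal((4, 2))  # 4 input features, 2 output features
out = bgcn_layer(adj, h, w)
print(out.shape)  # (3, 2), all entries in {-1, +1}
```

With 1-bit operands, the binary matrix products reduce to XNOR-and-popcount operations, which is what makes a crossbar-based IMC mapping compact compared with multi-bit GCN inference.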

Keywords