Nature Communications (Jan 2025)

Random resistive memory-based deep extreme point learning machine for unified visual processing

  • Shaocong Wang,
  • Yizhao Gao,
  • Yi Li,
  • Woyu Zhang,
  • Yifei Yu,
  • Bo Wang,
  • Ning Lin,
  • Hegan Chen,
  • Yue Zhang,
  • Yang Jiang,
  • Dingchen Wang,
  • Jia Chen,
  • Peng Dai,
  • Hao Jiang,
  • Peng Lin,
  • Xumeng Zhang,
  • Xiaojuan Qi,
  • Xiaoxin Xu,
  • Hayden So,
  • Zhongrui Wang,
  • Dashan Shang,
  • Qi Liu,
  • Kwang-Ting Cheng,
  • Ming Liu

DOI
https://doi.org/10.1038/s41467-025-56079-3
Journal volume & issue
Vol. 16, no. 1
pp. 1 – 11

Abstract

Visual sensors, including 3D light detection and ranging, neuromorphic dynamic vision sensors, and conventional frame cameras, are increasingly integrated into edge-side intelligent machines. However, their data are heterogeneous, causing complexity in system development. Moreover, conventional digital hardware is constrained by the von Neumann bottleneck and the physical limits of transistor scaling. The computational demands of training ever-growing models further exacerbate these challenges. We propose a hardware-software co-designed random resistive memory-based deep extreme point learning machine. Data-wise, the multi-sensory data are unified as a point set and processed universally. Software-wise, most weights are exempted from training. Hardware-wise, nanoscale resistive memory enables collocation of memory and processing, and leverages the inherent programming stochasticity to generate random weights. The co-designed system is validated on 3D segmentation (ShapeNet), event recognition (DVS128 Gesture), and image classification (Fashion-MNIST) tasks, achieving accuracy comparable to conventional systems while delivering 6.78×/21.04×/15.79× energy efficiency improvements and 70.12%/89.46%/85.61% training cost reductions.
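To make the core idea concrete, the sketch below illustrates the extreme-learning-machine principle described in the abstract: fixed random projection layers (standing in for conductances produced by the stochastic programming of resistive memory cells) extract point-set features, and only a lightweight readout is trained. This is a minimal, illustrative approximation, not the authors' implementation; the layer sizes, Gaussian weight distribution, max-pooling aggregation, and ridge-regression readout are all assumptions.

```python
# Minimal sketch (assumed details, not the paper's code): fixed random point-wise
# projections + trained linear readout, in the spirit of a deep extreme point
# learning machine.
import numpy as np

rng = np.random.default_rng(0)

def random_layer(in_dim, out_dim):
    # Fixed random weights; in the paper's hardware these would be realized by
    # the intrinsic programming stochasticity of resistive memory (assumed Gaussian here).
    return rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)

# Two fixed random layers shared across points (PointNet-style weight sharing, assumed).
W1 = random_layer(3, 64)     # each point is an (x, y, z) coordinate
W2 = random_layer(64, 256)

def embed(points):
    # points: (N, 3) point set unified from LiDAR / event / image data.
    h = np.maximum(points @ W1, 0.0)     # point-wise ReLU projection
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)                 # permutation-invariant max pooling -> (256,)

def train_readout(point_sets, labels, num_classes, lam=1e-3):
    # Only the readout is trained; closed-form ridge regression is an assumption.
    X = np.stack([embed(p) for p in point_sets])               # (M, 256)
    Y = np.eye(num_classes)[labels]                            # one-hot targets
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def predict(points, W_out):
    return int(np.argmax(embed(points) @ W_out))

# Toy usage with synthetic point clouds and arbitrary labels.
clouds = [rng.standard_normal((128, 3)) for _ in range(8)]
labels = np.array([i % 2 for i in range(8)])
W_out = train_readout(clouds, labels, num_classes=2)
print(predict(clouds[0], W_out))
```

Because the random layers are never updated, training reduces to fitting the readout, which is consistent with the abstract's claim that most weights are exempted from training and is the source of the reported training cost reductions.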