IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (Jan 2021)

Exploring the Feasibility of Using 3-D XPoint as an In-Memory Computing Accelerator

  • Masoud Zabihi,
  • Salonik Resch,
  • Husrev Cilasun,
  • Zamshed I. Chowdhury,
  • Zhengyang Zhao,
  • Ulya R. Karpuzcu,
  • Jian-Ping Wang,
  • Sachin S. Sapatnekar

DOI
https://doi.org/10.1109/JXCDC.2021.3112238
Journal volume & issue
Vol. 7, no. 2
pp. 88–96

Abstract

This article describes how 3-D XPoint memory arrays can be used as in-memory computing accelerators. We first show that thresholded matrix-vector multiplication (TMVM), the fundamental computational kernel in many applications including machine learning (ML), can be implemented within a 3-D XPoint array without requiring data to leave the array for processing. Building on this TMVM implementation, we then describe a binary neural inference engine. We extend the core concept to address system scalability, connecting multiple 3-D XPoint arrays, and power integrity. To assure power integrity within the 3-D XPoint array, we carefully analyze the parasitic effects of metal lines on the accuracy of the implementation. We quantify how these parasitics limit the size and configuration of a 3-D XPoint array, and estimate the maximum acceptable size of a 3-D XPoint subarray.
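As a point of reference, the TMVM kernel named in the abstract can be sketched functionally as below. This is a minimal digital sketch only: the weight values, input encoding, and per-row thresholds are illustrative assumptions, and the paper's contribution is realizing this operation with resistive cells inside the 3-D XPoint array rather than with digital arithmetic.

```python
# Functional sketch of thresholded matrix-vector multiplication (TMVM):
#   y[i] = 1 if sum_j W[i][j] * x[j] >= thresholds[i], else 0.
# All values here (binary weights, binary inputs, integer thresholds)
# are illustrative assumptions, not taken from the paper.

def tmvm(W, x, thresholds):
    """Binary thresholded matrix-vector product."""
    return [
        1 if sum(w * xi for w, xi in zip(row, x)) >= t else 0
        for row, t in zip(W, thresholds)
    ]

# Example: a 2x3 binary weight matrix applied to a binary input vector.
W = [[1, 0, 1],
     [1, 1, 1]]
x = [1, 1, 0]
thresholds = [1, 3]
print(tmvm(W, x, thresholds))  # -> [1, 0]
```

Stacking layers of this kernel, with each layer's binary output feeding the next layer's input, gives the binary neural inference engine the abstract refers to.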

Keywords