Nature Communications (Jun 2021)
An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations
Abstract
In standard computing architectures, memory and logic circuits are separated, a feature that slows the matrix operations vital to deep learning algorithms. Here, the authors present an alternative in-memory architecture and demonstrate a feasible approach to analog matrix multiplication.
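To make the terminology concrete, the sketch below (not taken from the paper; all values are hypothetical) shows how a matrix-vector product decomposes into multiply-accumulate (MAC) operations. In an analog in-memory crossbar, the weights would be stored as device conductances G and the inputs applied as voltages V, so that Ohm's law performs the multiplications and Kirchhoff's current law performs the accumulation along each column.

```python
import numpy as np

# Illustrative sketch only: G plays the role of a conductance (weight) matrix,
# V the role of an input voltage vector. The column currents of an ideal
# crossbar would equal G^T V, i.e. one MAC chain per output column.

G = np.array([[1.0, 0.5],      # hypothetical conductances (weights)
              [0.2, 0.8],
              [0.4, 0.1]])
V = np.array([0.3, 0.6, 0.9])  # hypothetical input voltages

# Digital reference: the same result computed as explicit MAC steps.
I_out = np.zeros(G.shape[1])
for col in range(G.shape[1]):
    acc = 0.0
    for row in range(G.shape[0]):
        acc += G[row, col] * V[row]   # one multiply-accumulate step
    I_out[col] = acc

assert np.allclose(I_out, G.T @ V)    # matches the matrix product
print(I_out)
```

In a conventional architecture each of these MAC steps requires moving a weight from memory to the processor; performing the summation in place, where the weights are stored, is what the in-memory approach aims to avoid.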