APL Machine Learning (Sep 2024)

Domain wall and magnetic tunnel junction hybrid for on-chip learning in UNet architecture

  • Venkatesh Vadde,
  • Bhaskaran Muralidharan,
  • Abhishek Sharma

DOI: https://doi.org/10.1063/5.0214042
Journal volume & issue: Vol. 2, no. 3, pp. 036101 – 036101-11

Abstract

We present a spintronic-device-based hardware implementation of UNet for segmentation tasks. Our approach involves designing hardware for the convolution, deconvolution, rectified linear activation (ReLU), and max pooling layers of the UNet architecture. We design the convolution and deconvolution layers of the network using the synaptic behavior of the domain wall magnetic tunnel junction (MTJ). We also construct the ReLU and max pooling functions of the network using the spin Hall-driven, orthogonal-current-injected MTJ. To incorporate the diverse physics of spin transport, magnetization dynamics, and CMOS elements in our UNet design, we employ a hybrid simulation setup that couples micromagnetic simulation, the non-equilibrium Green's function, and SPICE simulation with the network implementation. We evaluate our UNet design on the CamVid dataset and achieve a segmentation accuracy of 83.71% on test data, on par with the software implementation, with 821 mJ of energy consumption for on-chip training over 150 epochs. We further demonstrate nearly one order of magnitude (10×) improvement in the energy requirement of the network using the unstable ferromagnet (Δ = 4.58) over the stable ferromagnet (Δ = 45) for the ReLU and max pooling functions while maintaining similar accuracy. The hybrid architecture comprising the domain wall MTJ and the unstable-FM-based MTJ leads to an on-chip energy consumption of 85.79 mJ during training, with a testing energy cost of 1.55 µJ.
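The ReLU and max pooling functions realized by the MTJ devices correspond to standard tensor operations. As a minimal illustrative sketch (not the authors' simulation code, and making no assumptions about the device-level implementation), the two operations on a small feature map look like:

```python
import numpy as np

def relu(x):
    # Rectified linear activation: negative inputs clamp to zero,
    # the thresholded transfer behavior the MTJ-based ReLU emulates.
    return np.maximum(x, 0.0)

def max_pool2x2(x):
    # 2x2, stride-2 max pooling on an (H, W) feature map (H, W even):
    # keep the largest value in each non-overlapping 2x2 block.
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

# Toy 4x4 feature map for demonstration.
fmap = np.array([[ 1.0, -2.0,  3.0, 0.5],
                 [-1.0,  4.0, -0.5, 2.0],
                 [ 0.0,  1.5, -3.0, 1.0],
                 [ 2.5, -0.5,  0.0, 0.5]])
pooled = max_pool2x2(relu(fmap))
print(pooled)  # [[4.  3. ] [2.5 1. ]]
```

In the paper's design, these element-wise maximum and block-maximum operations are produced by device physics rather than digital arithmetic, which is where the reported energy savings arise.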