IEEE Access (Jan 2022)

A Generalized Workflow for Creating Machine Learning-Powered Compact Models for Multi-State Devices

  • Jack Hutchins,
  • Shamiul Alam,
  • Andre Zeumault,
  • Karsten Beckmann,
  • Nathaniel Cady,
  • Garrett S. Rose,
  • Ahmedullah Aziz

DOI
https://doi.org/10.1109/ACCESS.2022.3218333
Journal volume & issue
Vol. 10
pp. 115513 – 115519

Abstract

The predictive capability of existing physical descriptions of multi-state devices (e.g., oxide memristors, ferroelectrics, antiferroelectrics, etc.) cannot be fully leveraged in circuit simulations due to practical limitations on the complexity of compact models. We attempt to circumvent this issue by adopting a machine learning (ML)-based approach to develop a compact model that retains the full physical description of these devices. ML-based modeling approaches have garnered immense interest in recent years and have already been successfully utilized to build models for several novel devices. A known hurdle for ML-based compact modeling is the need for a large amount of experimental data to properly train the model. We propose a method to generate additional training data by duplicating the available data and adding Gaussian noise to the duplicates. We propose a generalized framework to (i) facilitate efficient training of ML-based device models, (ii) conduct seamless conversion to a Verilog-A model, and (iii) interface with industry-standard circuit simulators (HSPICE, SPECTRE, etc.). We demonstrate the capabilities of our framework using the hafnium oxide (HfOx) memristor as a test device. As the source of the training data, we use a physical model that unifies detailed atomic-level descriptions with a self-consistent evaluation of electronic transport. In addition, we test our model against experimental data from multiple memristor samples and from repeated cycles of the same sample. Our ML-based framework produces a circuit-compatible compact model that facilitates system-level simulations. With our model, we achieve a root mean squared error (RMSE) of 0.000863 and an R² of 0.977371 on our testing data.
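
The Gaussian-noise augmentation mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal Python/NumPy illustration assuming the training data are current-voltage (I-V) sweeps stored as arrays, and the noise level (noise_std), duplicate count (n_copies), and function name are hypothetical.

    import numpy as np

    def augment_with_gaussian_noise(iv_data, n_copies=10, noise_std=1e-3, seed=0):
        """Duplicate a measured I-V sweep and perturb each copy with Gaussian noise.

        iv_data   : array of shape (n_points, 2) holding (voltage, current) pairs
        n_copies  : number of noisy duplicates to generate (hypothetical value)
        noise_std : standard deviation of the added noise (hypothetical value)
        """
        rng = np.random.default_rng(seed)
        copies = []
        for _ in range(n_copies):
            noisy = iv_data + rng.normal(0.0, noise_std, size=iv_data.shape)
            copies.append(noisy)
        # Keep the original sweep and append the noisy duplicates.
        return np.concatenate([iv_data[None, ...], np.stack(copies)], axis=0)

    # Example: a toy 5-point sweep (placeholder data, not from the paper).
    sweep = np.column_stack([np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1e-4, 5)])
    augmented = augment_with_gaussian_noise(sweep)
    print(augmented.shape)  # (11, 5, 2): original sweep plus 10 noisy copies

In practice, the augmented set would then be fed to whatever ML model the framework trains before conversion to Verilog-A; the noise magnitude would need to be chosen relative to the measurement scale of the device data.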

Keywords