IEEE Journal of the Electron Devices Society (Jan 2024)

Enhancement and Expansion of the Neural Network-Based Compact Model Using a Binning Method

  • Jinyoung Choi,
  • Hyunjoon Jeong,
  • Sangmin Woo,
  • Hyungmin Cho,
  • Yohan Kim,
  • Jeong-Taek Kong,
  • Soyoung Kim

DOI
https://doi.org/10.1109/JEDS.2023.3346380
Journal volume & issue
Vol. 12
pp. 65–73

Abstract


The artificial neural network (ANN)-based compact model has significant advantages over physics-based standard compact models such as BSIM-CMG because it can achieve higher accuracy over a wide range of geometric parameters. This makes it particularly suitable for design space exploration and optimization. However, an ANN-based compact model that uses only one set of model parameters (global-ANN) requires a larger model size to achieve wider coverage and higher accuracy and to capture the unpredictable nonlinearities of emerging devices. This reduces simulation speed, and the resulting trade-off between simulation accuracy, model coverage, and simulation speed makes it difficult to utilize ANN-based compact models in a variety of applications. To solve this problem, we propose the first ANN-based compact modeling flow using a binning method (binning-ANN), and we address the training requirements and the data sparsity issues that the binning method can introduce in ANNs. In addition, we develop a bin size optimization guideline for the binning-ANN. As a result, the binning-ANN not only achieves higher accuracy but also offers much better expandability than existing methods.
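To make the binning idea concrete, the sketch below shows one possible way a binning-ANN evaluation could be organized: the geometric parameter space is partitioned into bins, each bin owns a small MLP, and evaluation first selects the bin containing the device geometry and then runs that bin's network. This is a minimal illustrative assumption, not the authors' code; the bin edges, the input parameters (L, W, Vgs, Vds), and the helpers `make_mlp`, `select_bin`, and `binning_ann_eval` are hypothetical.

```python
import numpy as np

# Illustrative sketch of a binning-ANN lookup (not the paper's implementation).
# The geometric space (gate length L and width W, in nm) is split into bins;
# each bin owns a small MLP, in place of one large "global" network.

rng = np.random.default_rng(0)

# Hypothetical bin edges over L and W (nm).
L_EDGES = np.array([10.0, 20.0, 40.0, 80.0])
W_EDGES = np.array([5.0, 10.0, 20.0])

def make_mlp(n_in=4, n_hidden=16, n_out=1):
    """Random small MLP weights standing in for a trained per-bin model."""
    return {
        "W1": rng.normal(scale=0.3, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.3, size=(n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def mlp_forward(p, x):
    """tanh hidden-layer MLP predicting, e.g., a (log-scaled) drain current."""
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

# One small network per (L-bin, W-bin) cell.
n_lbins, n_wbins = len(L_EDGES) - 1, len(W_EDGES) - 1
bin_models = [[make_mlp() for _ in range(n_wbins)] for _ in range(n_lbins)]

def select_bin(L, W):
    """Pick the bin whose geometry range contains (L, W)."""
    i = np.clip(np.searchsorted(L_EDGES, L, side="right") - 1, 0, n_lbins - 1)
    j = np.clip(np.searchsorted(W_EDGES, W, side="right") - 1, 0, n_wbins - 1)
    return int(i), int(j)

def binning_ann_eval(L, W, Vgs, Vds):
    """Evaluate the per-bin ANN for a device of geometry (L, W) at one bias point."""
    i, j = select_bin(L, W)
    x = np.array([L, W, Vgs, Vds])
    return mlp_forward(bin_models[i][j], x)

# Example: two devices that fall into different bins use different small networks.
print(binning_ann_eval(15.0, 7.0, 0.8, 0.05))
print(binning_ann_eval(60.0, 15.0, 0.8, 0.05))
```

Under this assumed structure, each per-bin network can stay small because it only has to fit a narrow geometry range, which is the speed/accuracy trade-off the abstract refers to; the paper's actual training flow, bin-boundary handling, and bin size optimization go beyond this sketch.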

Keywords