IEEE Access (Jan 2021)

Neuroevolution-Based Efficient Field Effect Transistor Compact Device Models

  • Ya-Wen Ho,
  • Tejender Singh Rawat,
  • Zheng-Kai Yang,
  • Sparsh Pratik,
  • Guan-Wen Lai,
  • Yen-Liang Tu,
  • Albert Lin

DOI: https://doi.org/10.1109/ACCESS.2021.3130254
Journal volume & issue: Vol. 9, pp. 159048–159058

Abstract


Artificial neural networks (ANN) and multilayer perceptrons (MLP) have proved efficient for designing highly accurate semiconductor device compact models (CM). Their ability to update their weights and biases through backpropagation makes them well suited to learning such tasks. To improve learning, an MLP usually requires a large network and thus a large number of model parameters, which significantly increases circuit-simulation time. Hence, optimizing the network architecture and topology is a tedious yet important task. In this work, we tune the network topology using neuro-evolution (NE) to develop semiconductor device CMs. With the input and output layers defined, we allow a genetic algorithm (GA), a gradient-free algorithm, to tune the network architecture, in combination with Adam, a gradient-based backpropagation algorithm, which optimizes the network weights and biases. For comparison, we also implemented baseline MLP models with a similar number of parameters. We observe that in most cases the NE models exhibit a lower root-mean-square error (RMSE) and require fewer training epochs than the MLP baseline models. For instance, with an early-stopping patience of 100 and varying numbers of model parameters, the test-set RMSEs in units of log(ampere) are 0.1461, 0.0985, 0.1274, 0.0971, and 0.0705 for NE versus 0.2254, 0.1423, 0.1429, 0.1425, and 0.1391 for MLP, for a foundry 28 nm technology node. The code is available on GitHub.
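The abstract describes a hybrid scheme: a genetic algorithm searches over network topologies while Adam trains each candidate's weights, with early stopping governed by a patience count on validation RMSE. The following is a minimal, hypothetical Python/PyTorch sketch of that loop, not the authors' released code; all names (build_mlp, fitness, evolve) and hyperparameters (population size, mutation range, learning rate) are illustrative assumptions.

```python
# Hypothetical sketch of GA topology search combined with Adam weight training.
# Not the authors' implementation; names and hyperparameters are assumptions.
import random
import torch
import torch.nn as nn

def build_mlp(hidden_sizes, n_in=2, n_out=1):
    """Assemble an MLP from a genome encoding hidden-layer widths."""
    layers, prev = [], n_in
    for h in hidden_sizes:
        layers += [nn.Linear(prev, h), nn.Tanh()]
        prev = h
    layers.append(nn.Linear(prev, n_out))
    return nn.Sequential(*layers)

def fitness(genome, x_tr, y_tr, x_va, y_va, epochs=500, patience=100):
    """Train a candidate with Adam; fitness is the best validation RMSE."""
    model = build_mlp(genome)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    best, wait = float("inf"), 0
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_tr), y_tr)
        loss.backward()
        opt.step()
        with torch.no_grad():
            rmse = nn.functional.mse_loss(model(x_va), y_va).sqrt().item()
        if rmse < best:
            best, wait = rmse, 0
        else:
            wait += 1
            if wait >= patience:  # early stopping on validation RMSE
                break
    return best

def evolve(x_tr, y_tr, x_va, y_va, pop=8, gens=5):
    """Genetic search over topologies: score, select, mutate, repeat."""
    population = [[random.randint(4, 32) for _ in range(random.randint(1, 3))]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda g: fitness(g, x_tr, y_tr, x_va, y_va))
        parents = scored[: pop // 2]  # truncation selection
        children = [[max(1, h + random.randint(-4, 4))
                     for h in random.choice(parents)]
                    for _ in range(pop - len(parents))]  # width mutation
        population = parents + children
    return min(population, key=lambda g: fitness(g, x_tr, y_tr, x_va, y_va))
```

In this sketch the genome is simply a list of hidden-layer widths; the paper's NE formulation may encode topology differently, but the division of labor is the same: the GA never touches weights, and Adam never changes the architecture.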

Keywords