APL Machine Learning (Mar 2024)

Training self-learning circuits for power-efficient solutions

  • Menachem Stern
  • Sam Dillavou
  • Dinesh Jayaraman
  • Douglas J. Durian
  • Andrea J. Liu

DOI
https://doi.org/10.1063/5.0181382
Journal volume & issue
Vol. 2, no. 1
pp. 016114 – 016114-16

Abstract

As the size and ubiquity of artificial intelligence and computational machine learning models grow, the energy required to train and use them is rapidly becoming economically and environmentally unsustainable. Recent laboratory prototypes of self-learning electronic circuits, such as “physical learning machines,” open the door to analog hardware that directly employs physics to learn desired functions from examples at low energy cost. In this work, we show that this hardware platform allows energy consumption to be reduced even further through good initial conditions and a new learning algorithm. Using analytical calculations, simulations, and experiments, we show that a trade-off emerges when the learning dynamics attempt to minimize both the error and the power consumption of the solution: greater power reductions can be achieved at the cost of reduced solution accuracy. Finally, we demonstrate a practical procedure for weighing the relative importance of error and power minimization, improving power efficiency for a given tolerance to error.
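
The trade-off described in the abstract can be pictured as descent on a weighted objective C = error + lam * power. The sketch below is a hypothetical illustration, not the authors' learning rule or circuit model: it uses a toy least-squares task, treats the sum of squared weights as a stand-in for power dissipation, and sweeps the weighting coefficient lam to show how stronger power penalties lower dissipation at the cost of accuracy. All names and the toy model are assumptions made for illustration.

# Hypothetical sketch of the error-vs-power trade-off. NOT the paper's
# algorithm or circuit model: a toy gradient descent on C = error + lam * power.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # toy inputs
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true                           # toy regression targets

def error(w):
    return np.mean((X @ w - y) ** 2)     # task error (mean squared error)

def power(w):
    return np.sum(w ** 2)                # stand-in for power dissipation

def grad(w, lam):
    g_err = 2 * X.T @ (X @ w - y) / len(y)
    g_pow = 2 * w
    return g_err + lam * g_pow           # gradient of C = error + lam * power

for lam in (0.0, 0.1, 1.0):              # sweep the error/power weighting
    w = np.zeros(4)
    for _ in range(2000):
        w -= 0.01 * grad(w, lam)
    print(f"lam={lam:4.1f}  error={error(w):.4f}  power={power(w):.4f}")

In this toy, lam plays the role of the error/power weighting the abstract describes: lam = 0 recovers the minimum-error solution, while larger values reduce the power proxy at the cost of higher error. The practical procedure mentioned in the abstract then amounts to tuning such a coefficient until the solution meets a given error tolerance at the lowest attainable power.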