Electronics Letters (Sep 2019)

GenSynth: a generative synthesis approach to learning generative machines for generating efficient neural networks

  • Alexander Wong,
  • Mohammad Javad Shafiee,
  • Brendan Chwyl,
  • Francis Li

DOI
https://doi.org/10.1049/el.2019.1719
Journal volume & issue
Vol. 55, no. 18
pp. 986 – 989

Abstract


The tremendous potential exhibited by deep learning is often offset by architectural and computational complexity, making widespread deployment a challenge for edge scenarios such as mobile and other consumer devices. To tackle this challenge, we explore the following idea: can we learn generative machines that automatically generate deep neural networks with efficient network architectures? In this study, we introduce the idea of generative synthesis, which is premised on the intricate interplay between a generator‐inquisitor pair that work in tandem to garner insights and learn to generate highly efficient deep neural networks that best satisfy operational requirements. Experimental results for image classification, semantic segmentation, and object detection tasks illustrate the efficacy of generative synthesis (GenSynth) in producing generators that automatically generate highly efficient deep neural networks (which we nickname FermiNets) with higher model efficiency and lower computational costs (reaching >10× greater model efficiency with fewer multiply‐accumulate operations than several tested state‐of‐the‐art networks), as well as higher energy efficiency (reaching >4× improvement in image inferences per joule consumed on an Nvidia Tegra X2 mobile processor). As such, GenSynth can be a powerful, generalised approach for accelerating and improving the building of deep neural networks for on‐device edge scenarios.
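
The abstract does not spell out the mechanics of the generator‐inquisitor interplay; the sketch below is only a toy illustration of the general idea — a generator proposing candidate architectures, an inquisitor probing them against an operational requirement (here a hypothetical multiply‐accumulate budget), and the probe results steering the generator's distribution. All function names, the architecture encoding, and the proxy metrics are assumptions for illustration, not the method described in the paper.

```python
# Hypothetical, minimal sketch of a generator-inquisitor loop in the spirit of
# generative synthesis. The encoding (per-layer widths), the proxy metrics, and
# the update rule are illustrative assumptions, not the authors' implementation.
import random

MAC_BUDGET = 50_000  # hypothetical operational requirement (multiply-accumulates)

def propose(widths_mean):
    """Generator: sample a candidate architecture as a list of layer widths."""
    return [max(4, int(random.gauss(m, m * 0.2))) for m in widths_mean]

def probe(widths):
    """Inquisitor: probe the candidate and return proxy metrics."""
    macs = sum(a * b for a, b in zip(widths[:-1], widths[1:]))  # rough MAC count
    accuracy_proxy = sum(widths) ** 0.5                         # stand-in for task performance
    return macs, accuracy_proxy

def update(widths_mean, widths, macs):
    """Use the inquisitor's insights to nudge the generator's distribution."""
    scale = 0.95 if macs > MAC_BUDGET else 1.02  # shrink if over budget, else grow
    return [0.9 * m + 0.1 * w * scale for m, w in zip(widths_mean, widths)]

widths_mean = [64.0, 128.0, 128.0, 64.0]  # initial generator parameters
best = None
for step in range(200):
    widths = propose(widths_mean)
    macs, acc = probe(widths)
    widths_mean = update(widths_mean, widths, macs)
    if macs <= MAC_BUDGET and (best is None or acc > best[1]):
        best = (widths, acc, macs)

print("best architecture found under the MAC budget:", best)
```

In this toy version the generator is simply a parameterised sampling distribution and the inquisitor is a cheap analytic probe; in practice both would be learned models and the probe would involve running the candidate network, but the loop structure conveys how operational requirements constrain what the generator learns to produce.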

Keywords