ETRI Journal (Jul 2021)

A low‐cost compensated approximate multiplier for Bfloat16 data processing on convolutional neural network inference

  • HyunJin Kim

DOI
https://doi.org/10.4218/etrij.2020-0370
Journal volume & issue
Vol. 43, no. 4
pp. 684 – 693

Abstract

This paper presents a low‐cost two‐stage approximate multiplier for bfloat16 (brain floating‐point) data processing. For cost‐efficient approximate multiplication, the first stage implements Mitchell's algorithm, which performs the approximate multiplication using only two adders. The second stage adopts an exact multiplication to compensate for the error from the first stage, multiplying the error terms and adding the truncated result to the final output. In our design, the low‐cost multiplications in both stages reduce hardware costs significantly and provide low relative errors by compensating for the error from the first stage. We apply our approximate multiplier to convolutional neural network (CNN) inference, observing only small accuracy drops with well‐known pre‐trained models on the ImageNet database. Therefore, our design allows low‐cost CNN inference systems with high test accuracy.
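The two-stage scheme the abstract describes can be illustrated in software. The sketch below, which uses ordinary Python floats rather than true bfloat16 values and whose function names and truncation width are our own assumptions (not taken from the paper), applies Mitchell's logarithmic approximation in stage one and then adds a truncated mantissa-product correction in stage two:

```python
import math

def split_float(x):
    """Decompose positive x as x = 2**k * (1 + m), with 0 <= m < 1.

    This mimics extracting the exponent and mantissa of a bfloat16
    value; for simplicity we use Python floats (illustrative only).
    """
    frac, exp = math.frexp(x)        # x = frac * 2**exp, 0.5 <= frac < 1
    return exp - 1, 2 * frac - 1     # rescale so x = 2**k * (1 + m)

def trunc_bits(f, n):
    """Truncate fraction f to n fractional bits (models a truncated adder input)."""
    return math.floor(f * 2 ** n) / 2 ** n

def mitchell_multiply(a, b, compensate=True, corr_bits=4):
    """Two-stage approximate multiply: Mitchell's algorithm plus an
    error-compensation stage. A sketch of the general technique, not
    the paper's exact hardware design; corr_bits is an assumed width."""
    k1, m1 = split_float(a)
    k2, m2 = split_float(b)
    s = m1 + m2                      # stage 1: add in the log domain
    if s < 1:
        approx = 2.0 ** (k1 + k2) * (1 + s)
        # exact product is 2**(k1+k2) * (1 + m1 + m2 + m1*m2),
        # so the stage-1 error term is m1*m2
        err = m1 * m2
    else:
        approx = 2.0 ** (k1 + k2 + 1) * s
        # in the carry case the stage-1 error term is (1-m1)*(1-m2)
        err = (1 - m1) * (1 - m2)
    if compensate:
        # stage 2: multiply the error terms exactly, truncate, and add back
        approx += 2.0 ** (k1 + k2) * trunc_bits(err, corr_bits)
    return approx
```

For example, `mitchell_multiply(3.0, 5.0, compensate=False)` returns 14.0 (Mitchell's error against the exact product 15.0), while the compensated call recovers 15.0 because the small error product survives the 4-bit truncation intact.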

Keywords