IEEE Access (Jan 2021)

Efficient Sum-Check Protocol for Convolution

  • Chanyang Ju,
  • Hyeonbum Lee,
  • Heewon Chung,
  • Jae Hong Seo,
  • Sungwook Kim

DOI
https://doi.org/10.1109/ACCESS.2021.3133442
Journal volume & issue
Vol. 9
pp. 164047–164059

Abstract

Many applications have recently adopted machine learning and deep learning techniques. Convolutional neural networks (CNNs) consist of sequential operations, including activation, pooling, convolution, and fully connected layers, and their computational cost is enormous, with the convolution and fully connected layers dominating. In general, a user with insufficient computing capacity delegates such tasks to a server with sufficient computing power, and the user may want to verify that the returned outputs are genuinely the machine learning model's predictions. In this paper, we are interested in verifying that the delegated computation of a CNN, one of the deep learning models for image recognition and classification, is performed correctly. Specifically, we focus on the verifiable computation of the matrix multiplications in a CNN's convolutional layers. We use Thaler's idea (CRYPTO 2013) for validating matrix multiplication and present a predicate function based on the insight that the sequence of operations can be viewed as sequential matrix multiplications. Furthermore, we lower the proving cost by splitting a convolution operation into two halves. As a result, we obtain an efficient sum-check protocol for the convolution operation that, like the state-of-the-art zkCNN approach (ePrint 2021), achieves asymptotically optimal proving cost. The proposed protocol is about 2× cheaper than zkCNN in terms of communication cost. We also propose a verifiable inference system with our method as its fundamental building block.
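To make the Thaler-style sum-check for matrix multiplication that the abstract builds on concrete, below is a minimal Python sketch. It checks a claimed product C = A·B over a prime field by evaluating the multilinear extension of C at a random point and reducing that claim, round by round, to a single evaluation of the extensions of A and B. The field modulus, function names, and the interleaved honest prover/verifier are illustrative assumptions, not the paper's notation or protocol.

```python
import random

P = 2**61 - 1            # Mersenne prime used as the field modulus (illustrative)
INV2 = pow(2, P - 2, P)  # inverse of 2 mod P, for quadratic interpolation

def eq_weights(point):
    """Multilinear Lagrange basis weights eq(point, b) for every boolean
    vector b over len(point) variables."""
    w = [1]
    for r in point:
        w = [v * (1 - r) % P for v in w] + [v * r % P for v in w]
    return w

def interp_quadratic(g0, g1, g2, r):
    """Evaluate at r the degree-2 polynomial with g(0)=g0, g(1)=g1, g(2)=g2."""
    return (g0 * (r - 1) * (r - 2) % P * INV2
            - g1 * r * (r - 2)
            + g2 * r * (r - 1) % P * INV2) % P

def sumcheck_product(a, b, claim):
    """Sum-check rounds for claim = sum_z a~(z) * b~(z), with honest prover
    and verifier interleaved; raises AssertionError on inconsistency."""
    while len(a) > 1:
        half = len(a) // 2
        # prover: round polynomial g(t), quadratic in the bound variable,
        # sent as its evaluations at t = 0, 1, 2
        g0 = sum(a[i] * b[i] for i in range(half)) % P
        g1 = sum(a[i + half] * b[i + half] for i in range(half)) % P
        g2 = sum((2 * a[i + half] - a[i]) * (2 * b[i + half] - b[i])
                 for i in range(half)) % P
        # verifier: consistency check, then a fresh random challenge
        assert (g0 + g1) % P == claim
        r = random.randrange(P)
        claim = interp_quadratic(g0, g1, g2, r)
        # both parties: bind the variable to r by folding the tables
        a = [((1 - r) * a[i] + r * a[i + half]) % P for i in range(half)]
        b = [((1 - r) * b[i] + r * b[i + half]) % P for i in range(half)]
    # final check: remaining claim must equal a~(r) * b~(r)
    assert claim == a[0] * b[0] % P
    return True

def verify_matmul(A, B, C):
    """Check C = A * B mod P with one sum-check over the shared dimension.
    Dimensions m, k, n must be powers of two."""
    m, n = len(A), len(B[0])
    rx = [random.randrange(P) for _ in range(max(m.bit_length() - 1, 0))]
    ry = [random.randrange(P) for _ in range(max(n.bit_length() - 1, 0))]
    wx, wy = eq_weights(rx), eq_weights(ry)
    # verifier: evaluate C~ at the random point (rx, ry)
    claim = sum(wx[i] * wy[j] * C[i][j]
                for i in range(m) for j in range(n)) % P
    # prover: restrict A~ and B~ to the same point, leaving only z variables
    a = [sum(wx[i] * row[z] for i, row in enumerate(A)) % P
         for z in range(len(B))]
    b = [sum(B[z][j] * wy[j] for j in range(n)) % P for z in range(len(B))]
    return sumcheck_product(a, b, claim)
```

The verifier's work is dominated by the single evaluation of C~ plus O(log k) rounds of constant-size messages, which is the source of the communication savings the abstract compares against zkCNN.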

Keywords