Applied Sciences (Dec 2023)

Enhancing Privacy in Large Language Model with Homomorphic Encryption and Sparse Attention

  • Lexin Zhang,
  • Changxiang Li,
  • Qi Hu,
  • Jingjing Lang,
  • Sirui Huang,
  • Linyue Hu,
  • Jingwen Leng,
  • Qiuhan Chen,
  • Chunli Lv

DOI
https://doi.org/10.3390/app132413146
Journal volume & issue
Vol. 13, no. 24
p. 13146

Abstract


To address the challenge of protecting personal privacy in modern dialogue models, this study introduces a privacy-preserving dialogue model framework that combines Fully Homomorphic Encryption (FHE) with a dynamic sparse attention (DSA) mechanism, aiming to improve the response efficiency and accuracy of dialogue systems without compromising user privacy. Comparative experiments confirm the advantages of the proposed framework, which achieves a precision of 0.92, a recall of 0.91, an accuracy of 0.92, and a latency of 15 ms. In particular, the proposed DSA module, while preserving data security, delivers up to a 100-fold performance improvement over traditional multi-head attention mechanisms.
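The abstract does not specify how the DSA module selects attention targets. One common family of sparse attention, which may differ from the paper's actual formulation, is top-k sparsification: each query attends only to its k highest-scoring keys, so the softmax and weighted sum touch k entries instead of all of them. A minimal NumPy sketch of that idea (all function names here are illustrative, not from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax; -inf entries become exact zeros
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k=4):
    """Illustrative top-k sparse attention (an assumption, not the
    paper's DSA): each query keeps only its k largest scores and
    masks the rest with -inf before the softmax."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (n_q, n_k)
    kth = np.partition(scores, -k, axis=-1)[:, -k:-k + 1]
    masked = np.where(scores >= kth, scores, -np.inf)
    return softmax(masked, axis=-1) @ V              # (n_q, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 16))
K = rng.normal(size=(32, 16))
V = rng.normal(size=(32, 16))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (8, 16)
```

With k fixed, each query's softmax and value aggregation cost O(k) rather than O(n), which is the kind of saving that a sparse module can exploit under FHE, where every homomorphic multiplication is expensive.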

Keywords