网络与信息安全学报 (Chinese Journal of Network and Information Security), Apr 2024

DNNobfus: a study on obfuscation-based edge-side model protection framework

  • Feiyang SONG, Xinmiao ZHAO, Fei YAN, Binlin CHENG, Liqiang ZHANG, Xiaolin YANG, Yang WANG

DOI
https://doi.org/10.11959/j.issn.2096-109x.2024019
Journal volume & issue
Vol. 10, no. 2
pp. 143 – 153

Abstract


The proliferation of artificial intelligence models has exposed them to a wide range of security threats, and the extensive deployment of deep learning models on edge devices has introduced novel security challenges. Because deep neural networks share similar structural characteristics, adversaries can employ decompilation to extract model structures and parameters and thereby reconstruct the models, compromising the model owner's intellectual property and increasing the risk of white-box attacks. To hinder model decompilers from locating and identifying model operators, recovering parameters, and parsing network topologies, an obfuscation framework was proposed and embedded within the model compilation process to guard against model extraction attacks. During the frontend optimization phase of deep learning compilers, three obfuscation techniques were developed and integrated: operator obfuscation, parameter obfuscation, and network topology obfuscation. The framework introduces opaque predicates, inserts fake control flow, and embeds redundant memory accesses to thwart the reverse engineering efforts of model decompilers. Experimental results show that the obfuscation framework, named DNNobfus, reduces the accuracy of state-of-the-art model decompilation tools in identifying model operator types and network connections to 21.63% and 48.24%, respectively. In addition, DNNobfus achieves an average time efficiency of 67.93% and an average space efficiency of 88.37%, outperforming the obfuscation tool Obfuscator-LLVM on both metrics.

Keywords