IET Communications (Aug 2024)
KDGAN: Knowledge distillation‐based model copyright protection for secure and communication‐efficient model publishing
Abstract
Deep learning-based models have become ubiquitous across a wide range of applications, including computer vision, natural language processing, and robotics. Despite their efficacy, deep neural network (DNN) models face a significant risk of copyright leakage, owing both to the exposure of the entire model architecture and to the communication burden of publishing large models. Safeguarding the intellectual property of DNN models while reducing communication time during model publishing remains an open challenge. To this end, this paper introduces a knowledge distillation approach that trains a surrogate model to stand in for the original DNN model. Specifically, a knowledge distillation generative adversarial network (KDGAN) is proposed to train a student model that achieves strong performance while safeguarding the copyright of the original large teacher model and improving communication efficiency during model publishing. Comprehensive experiments demonstrate the efficacy of the proposed model copyright protection, the communication efficiency of model publishing, and the superiority of KDGAN over other copyright protection mechanisms.
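The KDGAN architecture itself is specified in the body of the paper; as a minimal sketch of the knowledge-distillation component the abstract refers to, the snippet below illustrates the standard temperature-softened distillation loss, where a student is trained to match the teacher's softened output distribution. The temperature value and logits here are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces a softer
    # distribution, exposing the teacher's "dark knowledge".
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    # KL divergence between temperature-softened teacher and
    # student outputs, scaled by T^2 as in standard knowledge
    # distillation (Hinton et al. formulation).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * np.log(p / q)))

# Illustrative logits (hypothetical values):
teacher = [8.0, 2.0, 1.0]
student = [6.0, 3.0, 2.0]
loss = distillation_loss(teacher, student)
```

In a KDGAN-style setup, a loss of this form would be combined with an adversarial objective so the student mimics the teacher's behavior without inheriting its internal parameters, which is what allows the compact student to be published in place of the protected teacher.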
Keywords