Mathematics (Oct 2024)
TraceGuard: Fine-Tuning Pre-Trained Model by Using Stego Images to Trace Its User
Abstract
Currently, a significant number of pre-trained models are published online to provide services to users, owing to the rapid maturation and popularization of machine learning as a service (MLaaS). Some malicious users illegally obtain pre-trained models and redeploy them for profit. However, most current methods focus on verifying the copyright of a model rather than tracing responsibility for a suspect model. In this study, TraceGuard is proposed, the first steganography-based framework for tracing a suspect self-supervised learning (SSL) pre-trained model, which ascertains which authorized user illegally released the suspect model or whether the suspect model is independent. Concretely, the framework consists of an encoder and decoder pair and the SSL pre-trained model. Initially, the base pre-trained model is frozen, and the encoder and decoder are jointly trained so that the encoder can embed a secret key into a cover image and the decoder can extract that key from the embedding output by the base pre-trained model. Subsequently, with the encoder and decoder frozen, the base pre-trained model is fine-tuned on stego images to implant a fingerprint. To ensure the effectiveness and robustness of the fingerprint and the utility of the fingerprinted pre-trained model, three alternating steps are designed: model stealing simulation, fine-tuning for uniqueness, and fine-tuning for utility. Finally, a suspect pre-trained model is traced to its user by querying it with stego images. Experimental results demonstrate that TraceGuard can reliably trace suspect models and is robust against common fingerprint removal attacks such as fine-tuning, pruning, and model stealing. In future work, we will further improve robustness against model stealing attacks.
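To make the two-stage procedure concrete, the following is a minimal PyTorch-style sketch of the training pipeline described above. The architectures, shapes, losses, and hyperparameters (the toy Encoder and Decoder modules, KEY_BITS, EMB_DIM, the BCE objective, random data in place of real images) are illustrative assumptions rather than the paper's actual design, and the alternating stealing-simulation, uniqueness, and utility steps are omitted for brevity.

import torch
import torch.nn as nn

KEY_BITS, EMB_DIM = 64, 128  # assumed key length and embedding size

class Encoder(nn.Module):
    """Embeds a secret key into a cover image, producing a stego image."""
    def __init__(self):
        super().__init__()
        self.key_proj = nn.Linear(KEY_BITS, 3 * 32 * 32)
        self.mix = nn.Conv2d(6, 3, kernel_size=3, padding=1)
    def forward(self, cover, key):
        key_map = self.key_proj(key).view(-1, 3, 32, 32)
        # Small additive perturbation keeps the stego image close to the cover.
        return cover + 0.1 * self.mix(torch.cat([cover, key_map], dim=1))

class Decoder(nn.Module):
    """Recovers the secret key from the pre-trained model's embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, KEY_BITS))
    def forward(self, emb):
        return self.net(emb)

# Stand-in for the SSL pre-trained model (frozen in stage 1, tuned in stage 2).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, EMB_DIM))
encoder, decoder = Encoder(), Decoder()
bce = nn.BCEWithLogitsLoss()

# --- Stage 1: freeze the backbone; jointly train encoder and decoder so the
# --- key survives the backbone's embedding.
backbone.requires_grad_(False)
opt1 = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(100):
    cover = torch.rand(16, 3, 32, 32)
    key = torch.randint(0, 2, (16, KEY_BITS)).float()
    stego = encoder(cover, key)
    loss = bce(decoder(backbone(stego)), key)
    opt1.zero_grad(); loss.backward(); opt1.step()

# --- Stage 2: freeze encoder/decoder; fine-tune the backbone on stego images
# --- so the user's copy carries a distinct, extractable fingerprint.
backbone.requires_grad_(True)
encoder.requires_grad_(False); decoder.requires_grad_(False)
opt2 = torch.optim.Adam(backbone.parameters(), lr=1e-4)
for _ in range(100):
    cover = torch.rand(16, 3, 32, 32)
    key = torch.randint(0, 2, (16, KEY_BITS)).float()
    with torch.no_grad():
        stego = encoder(cover, key)
    loss = bce(decoder(backbone(stego)), key)
    opt2.zero_grad(); loss.backward(); opt2.step()

In the full pipeline, stage 2 would additionally alternate with model stealing simulations and utility-preserving fine-tuning; verification then amounts to querying the suspect model with stego images and matching the decoded bits against the key assigned to each authorized user.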
Keywords