IEEE Access (Jan 2024)
Securing Pseudo-Model Parallelism-Based Collaborative DNN Inference for Edge Devices
Abstract
Collaborative Deep Neural Network (DNN) Inference (CDNN) has emerged as a significant strategy for efficient, lightweight computation on resource-constrained devices (such as drones), especially during adverse events like natural disasters. Several strategies have been proposed for implementing collaborative inference; notably, parallel CDNN (P-CDNN) has emerged as a crucial one. In P-CDNN, the input data is partitioned and distributed across multiple drone devices, each equipped with a pre-trained DNN model. However, this collaborative framework is vulnerable to several security concerns, especially when one or more devices are compromised. To address this challenge and enhance the robustness of CDNNs, specifically in drone applications, we propose an innovative solution that modifies P-CDNN (which we call Pseudo-Model-Parallelism-based CDNN, or PS-CDNN). We also incorporate novel filters into the drone system to counter attacks on intermediate data (feature maps). These filters are trained using multi-strength adversarial training techniques, employing adversarial intermediate data collected from collaborating drones. This reinforcement significantly strengthens CDNNs against potential adversarial attacks. We conducted comprehensive evaluations using two widely recognized benchmark datasets, state-of-the-art Convolutional Neural Network (CNN) models, and a collaborative setup to validate the effectiveness of our approach. The results show a remarkable average improvement of $\approx 2.1\times$ in the top-1 accuracy of the model, highlighting the effectiveness and model-agnostic nature of our approach in drone applications. Furthermore, our approach adapts readily to various DNN architectures while substantially bolstering the security of drone-based intelligence applications.
Keywords