IEEE Access (Jan 2024)
Preserving Data Utility in Differentially Private Smart Home Data
Abstract
The proliferation of smart sensors and appliances enables a wide range of services. Nevertheless, aggregating data that contain privacy-sensitive information in a single location raises serious concerns, since such information can be misused by a malicious attacker. Moreover, previous studies that applied privacy mechanisms to such data did so at a substantial cost in data utility. In this paper, we propose privacy protection mechanisms for privacy-sensitive sensor data generated in a smart home. We leverage Rényi differential privacy (RDP) to preserve privacy; however, preliminary results showed that using RDP alone still significantly decreases data utility. We therefore propose a novel scheme called feature merging anonymization (FMA), which preserves privacy while maintaining data utility by merging feature dataframes of the same activities from other homes. We also define an expected trade-off under which the data utility maintained should be greater than the privacy preserved. To evaluate the proposed techniques, we define privacy preservation as the inverse of person identification (PI) accuracy and data utility as activity recognition (AR) accuracy. We trained the AR and PI models with and without FMA on two open smart-home datasets, i.e., the HIS and Toyota datasets. With FMA, PI accuracy dropped to 73.85% on HIS and 41.18% on Toyota, compared with 100% without FMA, while AR accuracy was maintained at 94.62% and 87.3% with FMA, compared with 98.58% and 89.28% without FMA, on HIS and Toyota respectively. A further experiment explored the feasibility of implementing FMA on a local server by partially merging frames of the original activity with frames of other activities at different merging ratios. The results show that the local server can still satisfy the expected trade-off at some ratios.
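To make the FMA idea above concrete, the following is a minimal sketch, not the authors' implementation: it assumes per-home pandas dataframes with a label column and numeric feature columns, and it reads "merging" as averaging each frame's features with a same-activity frame sampled from another home. All names (fma_merge, activity) are hypothetical.

```python
import pandas as pd

def fma_merge(home_df: pd.DataFrame, other_df: pd.DataFrame,
              activity_col: str = "activity") -> pd.DataFrame:
    """Sketch of feature merging anonymization (FMA): blend each frame's
    features with a same-activity frame from another home, blurring
    home-specific patterns while keeping the activity signal."""
    feature_cols = [c for c in home_df.columns if c != activity_col]
    merged_rows = []
    for _, row in home_df.iterrows():
        # Candidate frames of the same activity from the other home.
        candidates = other_df[other_df[activity_col] == row[activity_col]]
        if candidates.empty:
            merged_rows.append(row)  # No match: keep the original frame.
            continue
        partner = candidates.sample(n=1).iloc[0]
        merged = row.copy()
        # Element-wise average of the two homes' feature vectors.
        merged[feature_cols] = (row[feature_cols] + partner[feature_cols]) / 2
        merged_rows.append(merged)
    return pd.DataFrame(merged_rows, columns=home_df.columns)
```

Under this reading, a merging ratio (as in the local-server experiment) would control what fraction of frames is blended with frames of other activities rather than the same one.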
Keywords