Sensors (Nov 2023)
Mini-Batch Alignment: A Deep-Learning Model for Domain Factor-Independent Feature Extraction for Wi-Fi–CSI Data
Abstract
Unobtrusive (device-free) sensing aims to embed sensing into our daily lives by re-purposing communication technologies already present in our environments. Wireless Fidelity (Wi-Fi) sensing, using Channel State Information (CSI) measurement data, seems a perfect fit for this purpose since Wi-Fi networks are already omnipresent. A major challenge, however, is that CSI data are sensitive to ‘domain factors’ such as the position and orientation of a subject performing an activity or gesture. Because of these factors, CSI signal disturbances vary, causing domain shifts. These shifts lead to a lack of inference generalization, i.e., the model does not always perform well on unseen data during testing. We present a domain factor-independent feature-extraction pipeline called ‘mini-batch alignment’. Mini-batch alignment steers a feature-extraction model’s training process such that the model cannot separate the intermediate feature-probability density functions of previously seen input data batches from that of the current input data batch. By means of this steering technique, we hypothesize that mini-batch alignment (i) obviates the need for providing a domain label, (ii) reduces the likelihood of pipeline re-building and re-training when latent domain factors are encountered, and (iii) obviates the need for extra model storage and training time. We test this hypothesis via an extensive set of performance-evaluation experiments. The experiments involve both one- and two-domain-factor leave-out cross-validation, two open-source gesture-recognition datasets called SignFi and Widar3, two pre-processed input types called Doppler Frequency Spectrum (DFS) and Gramian Angular Difference Field (GADF), and several existing domain-shift mitigation techniques. We show that mini-batch alignment performs on a par with other domain-shift mitigation techniques in both position and orientation one-domain leave-out cross-validation on the Widar3 dataset with DFS as input type. When considering a memory-complexity-reduced version of the GADF as input type, mini-batch alignment shows hints of recovering performance relative to a standard baseline model, to the extent that no additional performance is lost due to weight steering in both one-domain-factor leave-out and two-orientation-domain-factor leave-out cross-validation scenarios. However, this is not enough evidence to conclude that the mini-batch alignment hypothesis is valid. We identified pitfalls that led to the hypothesis being invalidated: (i) a lack of good-quality benchmark datasets, (ii) invalid probability-distribution assumptions, and (iii) non-linear distribution-scaling issues.
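To illustrate the kind of batch-level distribution steering the abstract describes, the sketch below adds an alignment penalty between the features of the current mini-batch and features buffered from previously seen mini-batches, on top of an ordinary classification loss. This is not the authors' exact pipeline: the use of an RBF-kernel maximum mean discrepancy (MMD) as the alignment term, the buffer size, the loss weight, and all layer sizes are illustrative assumptions made only for this example.

```python
# Minimal sketch (assumed, not the paper's implementation): a feature extractor
# trained with a task loss plus an MMD penalty that discourages separating the
# current mini-batch's feature distribution from features of previous batches.
import torch
import torch.nn as nn


def rbf_mmd(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between feature sets x and y (RBF kernel)."""
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


class Pipeline(nn.Module):
    def __init__(self, in_dim=128, feat_dim=64, n_classes=6):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.extractor(x)
        return z, self.classifier(z)


model = Pipeline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
feature_buffer = []   # features from previously seen mini-batches
buffer_size = 4       # number of past batches to keep (assumption)
align_weight = 0.1    # trade-off between task loss and alignment loss (assumption)


def training_step(x, y):
    global feature_buffer
    z, logits = model(x)
    loss = ce(logits, y)
    if feature_buffer:
        past = torch.cat(feature_buffer, dim=0)
        # Penalize separability of current-batch features from past-batch features.
        loss = loss + align_weight * rbf_mmd(z, past)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Store detached features so the buffer does not retain old computation graphs.
    feature_buffer = (feature_buffer + [z.detach()])[-buffer_size:]
    return loss.item()


# Example usage with synthetic CSI-like feature vectors (placeholder data):
for _ in range(10):
    x = torch.randn(32, 128)
    y = torch.randint(0, 6, (32,))
    training_step(x, y)
```

Note that, in line with hypothesis (i) in the abstract, no domain label appears anywhere in this training step; the alignment term operates only on batches as they arrive.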
Keywords