IEEE Access (Jan 2020)

Echo-ID: Smart User Identification Leveraging Inaudible Sound Signals

  • Syed Wajid Ali Shah,
  • Arash Shaghaghi,
  • Salil S. Kanhere,
  • Jin Zhang,
  • Adnan Anwar,
  • Robin Doss

DOI
https://doi.org/10.1109/ACCESS.2020.3031899
Journal volume & issue
Vol. 8
pp. 194508 – 194522

Abstract

In this article, we present a novel user identification mechanism for smart spaces called Echo-ID (referred to as E-ID). Our solution relies on inaudible sound signals to capture a user's behavioral tapping/typing characteristics while they type a PIN on a PIN pad, and uses these characteristics to identify the corresponding user from a set of $N$ enrolled inhabitants. E-ID proposes an all-inclusive pipeline that generates and transmits appropriate sound signals and extracts a user-specific imprint (E-Sign) from the recorded signals. To accurately identify the corresponding user given an E-Sign sample, E-ID makes use of deep learning (i.e., a CNN for feature extraction) and an SVM classifier (for making the identification decision). We implemented a proof of concept of E-ID leveraging a commodity speaker and microphone. Our evaluations revealed that E-ID can identify users with average accuracies of 93% to 78% for enrolled groups of 2 to 5 subjects, respectively.
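
The abstract describes an identification stage that couples a CNN feature extractor with an SVM classifier. The sketch below is not the authors' implementation; it only illustrates that coupling, using an untrained placeholder CNN and random stand-in E-Sign tensors. All shapes, layer sizes, and hyperparameters are assumptions.

```python
# Minimal sketch of a CNN-feature + SVM identification stage (assumptions
# throughout: input size, network depth, hyperparameters, and the random
# placeholder data standing in for preprocessed E-Sign samples).
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

N_USERS = 5       # enrolled inhabitants (the paper evaluates groups of 2-5)
N_SAMPLES = 200   # placeholder E-Sign samples
H, W = 64, 64     # assumed size of a 2-D E-Sign representation


class FeatureCNN(nn.Module):
    """Small CNN used here only as a feature extractor (untrained in this sketch)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (H // 4) * (W // 4), feat_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


# Random tensors standing in for preprocessed E-Sign samples and user labels.
X = torch.randn(N_SAMPLES, 1, H, W)
y = np.random.randint(0, N_USERS, size=N_SAMPLES)

# Extract fixed-length feature vectors with the CNN.
cnn = FeatureCNN().eval()
with torch.no_grad():
    feats = cnn(X).numpy()

# The SVM makes the final identification decision on the CNN features.
X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out identification accuracy:", clf.score(X_te, y_te))
```

In a real system the CNN would be trained on enrolled users' E-Sign data before its features are handed to the SVM; with random data and an untrained network, the printed accuracy is only a pipeline check.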

Keywords