Array (Sep 2023)

An effective stacked autoencoder based depth separable convolutional neural network model for face mask detection

  • Sundaravadivazhagan Balasubaramanian,
  • Robin Cyriac,
  • Sahana Roshan,
  • Kulandaivel Maruthamuthu Paramasivam,
  • Boby Chellanthara Jose

Journal volume & issue
Vol. 19
p. 100294

Abstract
The COVID-19 pandemic has affected the entire world over the past few years. To prevent the spread of COVID-19, people have acclimatised to a new normal that includes working from home, communicating online, and maintaining personal hygiene. Numerous tools are required to prepare for and combat future transmissions. One of these elements for protecting individuals from fatal virus transmission is the face mask. Studies have indicated that wearing a mask may help to reduce the risk of viral transmission of all kinds. This has led many public places to take measures to ensure that their visitors wear adequate face masks and keep a safe distance from one another. Screening systems need to be installed at the entrances of businesses, schools, government buildings, private offices, and other important areas. A variety of face detection models have been designed using various algorithms and techniques. Most of the previously published research has not combined dimensionality reduction with depth-wise separable neural networks. The need to identify people who do not cover their faces in public is the driving factor for the development of this methodology. This research work proposes a deep learning technique to determine whether a person is wearing a mask and, if so, whether it is worn properly. The Stacked Auto Encoder (SAE) technique is implemented by stacking the following components: Principal Component Analysis (PCA) and a Depth-wise Separable Convolutional Neural Network (DWSC-NN). PCA is used to remove irrelevant features from the images and resulted in a high true positive rate in mask detection. The method described in this research achieved an accuracy of 94.16% and an F1 score of 96.009%.
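
The abstract describes the pipeline only at a high level, so the following is a minimal sketch of what stacking PCA with a depth-wise separable CNN can look like in code. The input resolution (64x64 grayscale crops), the 64 retained principal components reshaped to an 8x8 grid, the three output classes, and all layer sizes and function names are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch: PCA-based dimensionality reduction stacked with a
# depth-wise separable CNN classifier. All sizes below are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

IMG_SIZE = 64       # assumed input resolution of face crops
N_COMPONENTS = 64   # assumed number of principal components retained
GRID = 8            # 8 * 8 = 64, so PCA features can be reshaped to a grid
N_CLASSES = 3       # assumed classes: mask worn correctly / incorrectly / no mask

def reduce_with_pca(images, n_components=N_COMPONENTS):
    """Flatten images and project them onto the top principal components."""
    flat = images.reshape(len(images), -1)   # (N, IMG_SIZE * IMG_SIZE)
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(flat)        # (N, n_components)
    return pca, reduced

def build_dwsc_nn():
    """Small depth-wise separable CNN over PCA features reshaped to a grid."""
    model = models.Sequential([
        layers.Input(shape=(GRID, GRID, 1)),
        layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with random stand-in data (real face-mask images would replace this):
# images = np.random.rand(200, IMG_SIZE, IMG_SIZE).astype("float32")
# labels = np.random.randint(0, N_CLASSES, size=200)
# pca, feats = reduce_with_pca(images)
# model = build_dwsc_nn()
# model.fit(feats.reshape(-1, GRID, GRID, 1), labels, epochs=5)
```

The design choice illustrated here is the one the abstract emphasises: PCA discards low-variance, largely irrelevant pixel information before classification, and separable convolutions keep the classifier lightweight compared with standard convolutions.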

Keywords