IEEE Access (Jan 2023)

SignExplainer: An Explainable AI-Enabled Framework for Sign Language Recognition With Ensemble Learning

  • Deep R. Kothadiya,
  • Chintan M. Bhatt,
  • Amjad Rehman,
  • Faten S. Alamri,
  • Tanzila Saba

DOI: https://doi.org/10.1109/ACCESS.2023.3274851
Journal volume & issue: Vol. 11, pp. 47410–47419

Abstract

Deep learning has significantly aided recent advancements in artificial intelligence. Deep learning techniques have substantially outperformed typical machine learning approaches in fields such as computer vision, natural language processing (NLP), robotics, and human-computer interaction (HCI). However, deep learning models are ineffective at explaining their underlying mechanisms, which is why they are commonly regarded as black boxes. To establish confidence and accountability, deep learning applications need to explain the model's decision in addition to predicting the result. Explainable AI (XAI) research has produced methods that offer such interpretations for already trained neural networks, and these are highly recommended for computer vision tasks in domains such as medical science and defense systems. The proposed study applies XAI to sign language recognition. The methodology uses an attention-based ensemble learning approach to make the prediction model more accurate: ResNet50 is combined with a self-attention model to form the ensemble learning architecture. The proposed ensemble learning approach achieves a remarkable accuracy of 98.20%. To interpret the ensemble's predictions, the authors propose SignExplainer, which explains the relevance (as a percentage) of the predicted results. SignExplainer has shown excellent results compared to other conventional explainable AI models reported in the state of the art.
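
The abstract does not give implementation details, so the sketch below is only a minimal, assumption-laden illustration of the kind of architecture it describes, not the authors' code: a ResNet50 backbone whose spatial feature map is refined by a single-head self-attention layer, with several such members combined by soft voting. The class count (26, e.g. a static sign alphabet), the attention placement, and the two-member ensemble are all assumptions for demonstration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SelfAttentionHead(nn.Module):
    """Single-head self-attention over the backbone's spatial feature map."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=1, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, H, W) -> tokens: (batch, H*W, channels)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)   # residual connection
        return tokens.mean(dim=1)               # pooled: (batch, channels)

class SignClassifier(nn.Module):
    """ResNet50 features followed by self-attention and a linear classifier."""
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V2")
        # Drop the final avgpool and fc layers to keep the 2048-channel feature map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = SelfAttentionHead(dim=2048)
        self.head = nn.Linear(2048, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.attention(self.features(x)))

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Soft voting: average the softmax outputs of the member models."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    return probs.mean(dim=0)

if __name__ == "__main__":
    # Hypothetical two-member ensemble over 26 sign classes (an assumption).
    members = [SignClassifier(num_classes=26).eval() for _ in range(2)]
    with torch.no_grad():
        out = ensemble_predict(members, torch.randn(1, 3, 224, 224))
    print(out.argmax(dim=-1))
```

Soft voting is used here because averaging class probabilities is the most common way to combine ensemble members; the paper itself should be consulted for the exact fusion strategy and for how SignExplainer derives its percentage relevance scores.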

Keywords