IEEE Access (Jan 2024)

Enhancing User Trust and Interpretability in AI-Driven Feature Request Detection for Mobile App Reviews: An Explainable Approach

  • Ishaya Gambo,
  • Rhodes Massenon,
  • Chia-Chen Lin,
  • Roseline Oluwaseun Ogundokun,
  • Saurabh Agarwal,
  • Wooguil Pak

DOI
https://doi.org/10.1109/ACCESS.2024.3443527
Journal volume & issue
Vol. 12
pp. 114023–114045

Abstract


Mobile app developers struggle to identify feature requests in user reviews when prioritizing updates. While machine learning models can assist, their complexity often hinders transparency and trust. This paper presents an explainable Artificial Intelligence (AI) approach that combines advanced explanation techniques with engaging visualizations to address this issue. Our system integrates a bidirectional Long Short-Term Memory (BiLSTM) model with attention mechanisms, enhanced by Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). We evaluate this approach on a diverse dataset of 150,000 app reviews, achieving an F1 score of 0.82 and 89% accuracy, significantly outperforming baseline Support Vector Machine (F1: 0.66) and Convolutional Neural Network (CNN) (F1: 0.72) models. Empirical user studies with developers demonstrate that the explainable approach improves trust (a 27% increase when explanations are provided) and supports correct interpretation of predictions (73%). The system’s interactive visualizations allowed developers to validate predictions, with over 80% overlap between model-highlighted phrases and human annotations for feature requests. These findings highlight the importance of integrating explainable AI into real-world software engineering workflows. The results and outlined future directions offer a promising path toward more transparent, trustworthy, and effective AI systems for feature request detection in app reviews.
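To make the described pipeline concrete, the sketch below shows one plausible way to wire a BiLSTM classifier with attention pooling to a LIME text explainer. This is not the authors' implementation: the hyperparameters, the attention formulation, and the toy data (`reviews`, `labels`) are illustrative assumptions, and only standard TensorFlow/Keras and LIME APIs are used.

```python
# Illustrative sketch only: a BiLSTM + attention-pooling classifier for "feature request"
# detection, wrapped so LIME can highlight the words driving each prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from lime.lime_text import LimeTextExplainer

MAX_TOKENS, SEQ_LEN, EMB_DIM = 20000, 100, 128  # assumed hyperparameters

# Maps raw review strings to padded integer sequences.
vectorizer = layers.TextVectorization(max_tokens=MAX_TOKENS,
                                      output_sequence_length=SEQ_LEN)

def build_model():
    inp = layers.Input(shape=(SEQ_LEN,), dtype="int32")
    x = layers.Embedding(MAX_TOKENS, EMB_DIM)(inp)
    h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # BiLSTM encoder
    # Simple additive attention: score each time step, softmax, weighted sum of states.
    scores = layers.Dense(1, activation="tanh")(h)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Dot(axes=(1, 1))([weights, h])      # (batch, 1, 128)
    context = layers.Flatten()(context)
    out = layers.Dense(1, activation="sigmoid")(context)  # P(feature request)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def predict_proba(texts):
    """LIME expects a function mapping raw texts to rows of class probabilities."""
    seqs = vectorizer(tf.constant(texts))
    p = model.predict(seqs, verbose=0)
    return np.hstack([1 - p, p])  # columns: [other, feature_request]

# Hypothetical labeled examples; the paper's corpus of 150,000 reviews would go here.
reviews = ["Please add dark mode to the settings screen", "The app crashes on startup"]
labels = np.array([1, 0])

vectorizer.adapt(reviews)
model = build_model()
model.fit(vectorizer(tf.constant(reviews)), labels, epochs=1, verbose=0)

# Explain one prediction: LIME lists the words that push the score toward "feature_request".
explainer = LimeTextExplainer(class_names=["other", "feature_request"])
exp = explainer.explain_instance(reviews[0], predict_proba, num_features=5)
print(exp.as_list())
```

A SHAP explanation could be attached in the same way, e.g. by passing `predict_proba` to a SHAP explainer with a text masker; the word-level attributions from either method are what the paper's visualizations would surface to developers.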

Keywords