JMIR Human Factors (Sep 2024)

Evaluating the Usability and Quality of a Clinical Mobile App for Assisting Physicians in Head Computed Tomography Scan Ordering: Mixed Methods Study

  • Zahra Meidani,
  • Aydine Omidvar,
  • Hossein Akbari,
  • Fatemeh Asghari,
  • Reza Khajouei,
  • Zahra Nazemi,
  • Ehsan Nabovati,
  • Felix Holl

DOI
https://doi.org/10.2196/55790
Journal volume & issue
Vol. 11
p. e55790

Abstract

Background: Among the numerous factors contributing to health care providers’ engagement with mobile apps, including user characteristics (eg, dexterity, anatomy, and attitude) and mobile features (eg, screen and button size), usability and app quality have been identified as the most influential.

Objective: This study aims to investigate the usability and quality of the Head Computed Tomography Scan Appropriateness Criteria (HAC) mobile app for physicians’ computed tomography scan ordering.

Methods: The study design was based primarily on methodological triangulation, using mixed methods research that combined quantitative and qualitative think-aloud usability testing, quantitative quality assessment with the Mobile App Rating Scale (MARS), and debriefing across 3 phases. In total, 16 medical interns participated in the quality assessment and in testing usability characteristics, including efficiency, effectiveness, learnability, errors, and satisfaction with the HAC app.

Results: The efficiency and effectiveness of the HAC app were satisfactory, with ratings of 97.8% and 96.9%, respectively. The MARS assessment indicated an overall favorable quality score for the HAC app (82 out of 100). Across the 4 MARS subscales, Information (73.37 out of 100) and Engagement (73.48 out of 100) received the lowest scores, while Aesthetics received the highest (87.86 out of 100). Item-level analysis of each subscale showed that within Engagement, the lowest-scoring item was “customization” (63.6 out of 100), and within Functionality, the lowest-scoring item was “performance” (67.4 out of 100). Qualitative think-aloud usability testing of the HAC app revealed notable usability issues grouped into 8 main categories: lack of finger-friendly touch targets, poor search capabilities, input problems, inefficient data presentation and information control, unclear control and confirmation, lack of predictive capabilities, poor assistance and support, and unclear navigation logic.

Conclusions: Evaluating the quality and usability of mobile apps with a mixed methods approach yields valuable information about their functionality and shortcomings. A more holistic, mixed methods strategy is highly recommended when evaluating mobile apps, because results from any single method alone do not provide trustworthy and reliable information about an app’s usability and quality.