IEEE Access (Jan 2020)

Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

  • Thilina Dulantha Lalitharatne,
  • Yongxuan Tan,
  • Florence Leong,
  • Liang He,
  • Nejra Van Zalk,
  • Simon De Lusignan,
  • Fumiya Iida,
  • Thrishantha Nanayakkara

DOI
https://doi.org/10.1109/ACCESS.2020.3041173
Journal volume & issue
Vol. 8
pp. 215874 – 215891

Abstract


Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient's facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This article reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can be incorporated into the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different ages, genders, and ethnic groups, making them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions will be beneficial to researchers interested in initiating or developing facial expression rendering systems for medical training simulators.

Keywords