Journal of Pain Research (Nov 2024)
Employing the Artificial Intelligence Object Detection Tool YOLOv8 for Real-Time Pain Detection: A Feasibility Study
Abstract
Marco Cascella,1 Mohammed Naveed Shariff,2 Giuliano Lo Bianco,3 Federica Monaco,4 Francesca Gargano,5 Alessandro Simonini,6 Alfonso Maria Ponsiglione,7 Ornella Piazza1

1Anesthesia and Pain Medicine, Department of Medicine, Surgery and Dentistry “Scuola Medica Salernitana”, University of Salerno, Baronissi, 84081, Italy; 2Department of AI&DS, Rajalakshmi Institute of Technology, Chennai, TN, India; 3Anesthesiology and Pain Department, Fondazione Istituto G. Giglio Cefalù, Palermo, Italy; 4Anesthesia and Pain Medicine, ASL NA1, Napoli, Italy; 5Anesthesia and Intensive Care, U.O.C. Fondazione Policlinico Campus Bio-Medico, Roma, Italy; 6Pediatric Anesthesia and Intensive Care Unit, Salesi Children’s Hospital, Ancona, Italy; 7Department of Electrical Engineering and Information Technology, University of Naples “Federico II”, Naples, 80125, Italy

Correspondence: Giuliano Lo Bianco, Anesthesiology and Pain Department, Fondazione Istituto G. Giglio Cefalù, Palermo, Italy, Email [email protected]

Background: Effective pain management is crucial for patient care, impacting comfort, recovery, and overall well-being. Traditional subjective pain assessment methods can be challenging, particularly in specific patient populations. This research explores an alternative approach that uses computer vision (CV) to detect pain through facial expressions.

Methods: The study implements the YOLOv8 real-time object detection model to analyze facial expressions indicative of pain. From four pain datasets, a dataset of pain-expressing faces was compiled, and each image was carefully labeled based on the presence of pain-associated Action Units (AUs). The labeling distinguished between two classes: pain and no pain. The pain category covered specific AUs (AU4, AU6, AU7, AU9, AU10, and AU43) following the Prkachin and Solomon Pain Intensity (PSPI) scoring method; images showing these AUs with a PSPI score above 2 were labeled as expressing pain. Manual labeling was performed with the open-source tool makesense.ai to ensure precise annotation. The dataset was then split into training and testing subsets, each containing a mix of pain and no-pain images. The YOLOv8 model underwent iterative training over 10 epochs, and its performance was validated using precision, recall, mean Average Precision (mAP), and F1 score.

Results: Considering all classes collectively, the model attained a mAP of 0.893 at an Intersection over Union (IoU) threshold of 0.5. The precision for “pain” and “nopain” detection was 0.868 and 0.919, respectively. F1 scores for the classes “pain”, “nopain”, and “all classes” reached a peak value of 0.80. Finally, the model was tested on the Delaware dataset and in a real-world scenario.

Discussion: Despite its limitations, this study highlights the promise of real-time computer vision models for pain detection, with potential applications in clinical settings. Future research will focus on evaluating the model’s generalizability across diverse clinical scenarios and its integration into clinical workflows to improve patient care.

Keywords: pain, artificial intelligence, automatic pain assessment, computer vision, action units
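The labeling rule described in Methods follows the standard PSPI formula, PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, with an image labeled "pain" only when the score exceeds 2. The sketch below illustrates that rule; the function names and the AU-intensity dictionary format are illustrative assumptions, not taken from the paper, and AU intensities are assumed to be available (e.g., from a FACS coder or an AU-estimation tool).

```python
# Minimal sketch of the PSPI-based labeling rule (assumed input format:
# a dict mapping AU number -> intensity, 0-5 for most AUs, 0-1 for AU43).

def pspi_score(au: dict) -> int:
    """Prkachin and Solomon Pain Intensity:
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    return (au.get(4, 0)
            + max(au.get(6, 0), au.get(7, 0))
            + max(au.get(9, 0), au.get(10, 0))
            + au.get(43, 0))

def label_image(au: dict) -> str:
    """Label as 'pain' only when PSPI exceeds 2, mirroring the
    threshold used for the dataset labels in the study."""
    return "pain" if pspi_score(au) > 2 else "nopain"

# Example: brow lowerer (AU4=2) plus cheek raiser (AU6=3) gives PSPI = 5.
print(label_image({4: 2, 6: 3}))  # -> 'pain'
```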
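The training and validation procedure (10 epochs, two classes, precision/recall/mAP metrics) maps directly onto the Ultralytics YOLOv8 Python API. The following is a hedged sketch under stated assumptions: the abstract specifies only the two classes and the 10 epochs, so the checkpoint variant, dataset YAML name, and image size here are placeholders.

```python
# Sketch of the training/validation loop with the Ultralytics YOLOv8 API.
from ultralytics import YOLO

# Pretrained checkpoint; the model variant (n/s/m/l/x) is not stated
# in the abstract, so yolov8n.pt is an assumption.
model = YOLO("yolov8n.pt")

# 'pain_faces.yaml' is a hypothetical dataset config pointing to the
# train/test splits and listing the two class names: ['pain', 'nopain'].
model.train(data="pain_faces.yaml", epochs=10, imgsz=640)

# Validation reports precision, recall, and mAP, matching the metrics
# used in the study.
metrics = model.val()
print(metrics.box.map50)  # mean Average Precision at IoU threshold 0.5
```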
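Finally, the real-world test mentioned in Results implies running the trained detector on live video. A minimal sketch of such real-time inference is shown below; the webcam source, weights path, and confidence threshold are assumptions for illustration, not details from the paper.

```python
# Sketch of real-time pain/no-pain detection on a live video stream.
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical path to the trained weights

# source=0 is the default webcam; stream=True yields results frame by frame.
for result in model.predict(source=0, stream=True, conf=0.5):
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]  # 'pain' or 'nopain'
        print(f"{cls_name}: confidence {float(box.conf):.2f}")
```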