Scientific Reports (Jan 2022)

Contactless facial video recording with deep learning models for the detection of atrial fibrillation

  • Yu Sun,
  • Yin-Yin Yang,
  • Bing-Jhang Wu,
  • Po-Wei Huang,
  • Shao-En Cheng,
  • Bing-Fei Wu,
  • Chun-Chang Chen

DOI
https://doi.org/10.1038/s41598-021-03453-y
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 10

Abstract

Atrial fibrillation (AF) is often asymptomatic and paroxysmal, so screening and monitoring are needed, especially for people at high risk. This study sought to use camera-based remote photoplethysmography (rPPG) with a deep convolutional neural network (DCNN) learning model for AF detection. All participants were classified into groups of AF, normal sinus rhythm (NSR), and other abnormality based on 12-lead ECG. They then underwent facial video recording for 10 min, with rPPG signals extracted and segmented into 30-s clips as inputs for training the DCNN models. Using a voting algorithm, a participant was predicted as AF if > 50% of their rPPG segments were determined to be AF rhythm by the model. Of the 453 participants (mean age 69.3 ± 13.0 years; 46% women), a total of 7320 segments (1969 AF, 1604 NSR, and 3747 others) were analyzed by the DCNN models. The accuracy of rPPG with the deep learning model for discriminating AF from NSR and other abnormalities was 90.0% for 30-s segments and 97.1% for 10-min recordings. This contactless, camera-based rPPG technique with a deep learning model achieved high accuracy in discriminating AF from non-AF and may enable feasible large-scale screening or monitoring in the future.
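The participant-level decision rule described above (label a participant as AF when more than half of their 30-s segments are predicted AF) can be sketched as follows. This is a minimal illustration assuming per-segment string labels from the DCNN; the function name and threshold parameter are hypothetical, not from the paper.

```python
def classify_participant(segment_preds, threshold=0.5):
    """Majority-vote rule over per-segment DCNN predictions.

    segment_preds: list of per-segment labels, e.g. "AF" or "non-AF"
    (hypothetical representation; the paper's implementation is not given).
    Returns "AF" if the AF fraction strictly exceeds `threshold` (0.5
    in the paper), else "non-AF".
    """
    af_votes = sum(1 for p in segment_preds if p == "AF")
    return "AF" if af_votes / len(segment_preds) > threshold else "non-AF"
```

For example, a 10-min recording yielding twenty 30-s segments with twelve AF predictions (60%) would be labeled AF, while exactly ten AF predictions (50%) would not exceed the threshold.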