Informatics in Medicine Unlocked (Jan 2023)
Deep learning-based real time detection for cardiac objects with fetal ultrasound video
Abstract
Fetal cardiac anatomical structure interpretation by ultrasound (US) is a key part of prenatal assessment. Unfortunately, the numerous speckles in US video, the small size of fetal cardiac structures, and unfixed fetal positions make manual detection difficult. To alleviate these problems, deep learning models have been developed to fully automate the processing of fetal echocardiographs using 2-dimensional cross-sectional images. The weakness of such an approach is that, in clinical practice, a physician diagnoses patients by interpreting US video of 3-dimensional objects directly. To improve fetal cardiac anatomical structure interpretation via US for accurate, real-time diagnosis, this paper proposes real-time fetal cardiac substructure detection in US video with the You Only Look Once (YOLO) framework. In YOLO, an end-to-end neural network predicts cardiac substructure objects, bounding boxes, and class probabilities simultaneously. To achieve reliable performance, the new YOLOv7 architecture was trained on 40 fetal echocardiography videos and fine-tuned to work optimally and run efficiently. We conducted experiments with nine fetal cardiac substructure objects: the left atrium, right atrium, left ventricle, right ventricle, tricuspid valve, pulmonary valve, mitral valve, aortic valve, and aorta. The results yielded a highest mean average precision of 82.10%, reaching 17 frames per second (FPS) for nine cardiac substructure objects in 0.3 ms. The main finding of our study is that, even with a limited number of US videos, YOLOv7 can detect fetal cardiac substructure objects in real time. With proper fine-tuning, such a network can efficiently and automatically detect small fetal cardiac objects at a rapid pace. This work mainly assists medical experts in the fetal cardiac anatomy diagnostic process.
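YOLO-style detectors emit many overlapping candidate boxes per frame, so a standard post-processing step, non-max suppression (NMS), keeps only the highest-scoring box per object. The sketch below is a minimal, generic illustration of that step; the box format (corner coordinates), detection tuples, and IoU threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of greedy non-max suppression over YOLO-style detections.
# Boxes use (x1, y1, x2, y2) corner format; each detection is (box, score, class_id).

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_thresh=0.45):
    """Greedy NMS: visit boxes by descending score, keep a box only if it
    does not overlap an already-kept box above the IoU threshold."""
    kept = []
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(det[0], k[0]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

For example, two heavily overlapping candidates for the same chamber collapse to the single higher-scoring box, while a distant box for another substructure survives.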