Brazilian Archives of Biology and Technology (Sep 2024)
Multimodal Deep Dilated Convolutional Learning for Lung Disease Diagnosis
Abstract
Accurate and timely identification of pulmonary disease is critical for effective therapeutic intervention. Traditional diagnostic approaches rely on single-modality imaging such as computed tomography (CT), chest radiography (X-ray), or positron emission tomography (PET). Because each modality captures only part of the relevant information, single-modality analysis limits both diagnostic accuracy and the ability to tailor therapy to individual patients. To address this gap, this paper presents a multimodal deep learning framework that integrates CT, X-ray, and PET data. Modality-specific features are first extracted from each input, and fusion strategies (early or late fusion) combine the complementary information across modalities. Additional convolutional neural network (CNN) layers and pooling operations produce increasingly abstract representations, which are then passed to fully connected layers for classification. The model is trained with appropriate loss functions and optimized using gradient-based techniques. Compared with conventional single-modality methods, the proposed approach yields a significant improvement in the accuracy of lung disease diagnosis.
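The abstract specifies per-modality feature extraction, early or late fusion, stacked CNN and pooling layers, fully connected classification, and gradient-based training, but not the exact architecture. The following is a minimal late-fusion sketch under assumed settings (PyTorch, single-channel 128x128 inputs, a hypothetical four-class problem, illustrative layer widths, Adam optimizer, cross-entropy loss); it is not the authors' implementation.

```python
# Minimal late-fusion sketch, not the paper's exact architecture.
# Assumptions: single-channel 128x128 CT, X-ray, and PET inputs; 4 classes.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Per-modality feature extractor: stacked convolutions with pooling."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64 -> 32
            nn.AdaptiveAvgPool2d(1),         # global pooling -> 32-dim feature
        )

    def forward(self, x):
        return self.features(x).flatten(1)

class LateFusionNet(nn.Module):
    """Concatenates per-modality features, then classifies with FC layers."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.ct, self.xray, self.pet = (ModalityBranch() for _ in range(3))
        self.classifier = nn.Sequential(
            nn.Linear(3 * 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, ct, xray, pet):
        fused = torch.cat([self.ct(ct), self.xray(xray), self.pet(pet)], dim=1)
        return self.classifier(fused)

# One gradient-based training step with a cross-entropy loss (dummy data).
model = LateFusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

ct = torch.randn(2, 1, 128, 128)      # batch of 2 scans per modality
xray = torch.randn(2, 1, 128, 128)
pet = torch.randn(2, 1, 128, 128)
labels = torch.tensor([0, 2])

logits = model(ct, xray, pet)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

An early-fusion variant would instead stack the three scans as input channels of a single branch; the late-fusion form shown here keeps separate extractors so each modality contributes its own learned features before they are combined.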
Keywords