IEEE Access (Jan 2025)
Multimodal Autism Spectrum Disorder Method Using GCN With Dual Transformers
Abstract
Autism spectrum disorder (ASD) presents diagnostic challenges due to its heterogeneous nature, necessitating advanced methods for accurate identification. This study proposes a novel diagnostic approach that integrates graph convolutional networks (GCNs) with dual transformer architectures, optimized through a co-training strategy. The first transformer extracts intricate temporal features from functional magnetic resonance imaging (fMRI) data, which are crucial for understanding brain activity over time. The second transformer fuses these temporal features with the spatial features learned by the GCN, effectively combining both dimensions of the neuroimaging data. Co-training is introduced to simultaneously harness fMRI and structural magnetic resonance imaging (sMRI) data, improving the model's capacity to generalize across datasets. The method was rigorously evaluated on the ABIDE-I and ABIDE-II datasets, with performance assessed through nested ten-fold cross-validation and leave-one-out cross-validation. The experimental results show that the approach significantly outperforms existing baseline and state-of-the-art models, achieving 79.47% accuracy, 78.97% precision, 82.11% recall, and an AUC of 0.85. These findings highlight the robustness of combining dual transformer models with GCNs and co-training, providing a powerful framework for ASD classification.
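To make the described architecture concrete, the following is a minimal PyTorch sketch of the fMRI branch only: a temporal transformer over ROI time series, a GCN over a functional-connectivity graph, and a second transformer that fuses the two feature streams before classification. All layer sizes, class names, the concatenation-based fusion, and the mean pooling are illustrative assumptions; the sMRI co-training branch and the exact configuration used in the paper are not reproduced here.

```python
# Hypothetical sketch of a GCN + dual-transformer ASD classifier (fMRI branch only).
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """Single graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (batch, n_rois, in_dim); adj: (batch, n_rois, n_rois) normalized adjacency.
        return torch.relu(self.linear(torch.bmm(adj, x)))


class DualTransformerGCN(nn.Module):
    def __init__(self, n_rois=200, t_len=100, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        # Transformer 1: temporal features from each ROI's fMRI time series.
        self.temporal_proj = nn.Linear(t_len, d_model)
        self.temporal_tf = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        # GCN: spatial features over the functional-connectivity graph.
        self.gcn = GCNLayer(t_len, d_model)
        # Transformer 2: fuses the temporal and spatial token streams.
        self.fusion_tf = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, ts, adj):
        # ts: (batch, n_rois, t_len) ROI time series.
        temporal = self.temporal_tf(self.temporal_proj(ts))   # (B, n_rois, d_model)
        spatial = self.gcn(ts, adj)                           # (B, n_rois, d_model)
        fused = self.fusion_tf(torch.cat([temporal, spatial], dim=1))
        return self.classifier(fused.mean(dim=1))             # ASD vs. control logits


# Example: 8 subjects, 200 ROIs, 100 time points.
model = DualTransformerGCN()
ts = torch.randn(8, 200, 100)
adj = torch.softmax(torch.randn(8, 200, 200), dim=-1)  # stand-in normalized adjacency
print(model(ts, adj).shape)  # torch.Size([8, 2])
```

In this sketch, fusion is done by letting the second transformer attend jointly over the concatenated temporal and spatial tokens; other fusion schemes (cross-attention, gating) would fit the same interface.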
Keywords