IEEE Access (Jan 2024)
Joint Federated Learning Using Deep Segmentation and the Gaussian Mixture Model for Breast Cancer Tumors
Abstract
Medical image segmentation is crucial for deep learning (DL) applications in clinical settings. Ensuring accurate segmentation is challenging due to diverse image sources and significant data-sharing and privacy concerns in centralized learning setups. To address these challenges, we introduce a novel federated learning (FL) framework tailored for breast cancer. First, we use random regions of interest (ROIs) and bilinear interpolation to estimate pixel intensity from neighboring pixels, addressing data inconsistencies arising from heterogeneous distribution parameters and increasing the dataset size. We then employ the UNet model with a deep convolutional backbone (Visual Geometry Group [VGG]) to train on the augmented data, enhancing recognition during training and testing. Second, we apply the Gaussian Mixture Model (GMM) to improve segmentation quality. This approach effectively manages distinct data distributions across hospitals and highlights images with a higher likelihood of tumor presence. Compared to other segmentation algorithms, the GMM enhances the salience of valuable images, improving tumor detection. Finally, extensive experiments under two aggregation scenarios, federated averaging (FedAvg) and federated batch normalization (FedBN), demonstrate that our method outperforms several state-of-the-art segmentation methods on five public breast cancer datasets. These findings validate the effectiveness of the proposed framework, promising significant benefits for the clinical community and society.
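To make the GMM step concrete, the following is a minimal sketch (not the authors' code) of how a two-component Gaussian Mixture Model can be fitted to pixel intensities of a grayscale breast-ultrasound image to produce a per-pixel tumor-likelihood map. The synthetic image, component count, and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a grayscale ultrasound image (values roughly in [0, 1]),
# with a darker "lesion" patch; a real pipeline would load a dataset image here.
rng = np.random.default_rng(0)
image = rng.normal(loc=0.6, scale=0.1, size=(128, 128))
image[40:70, 50:90] = rng.normal(loc=0.2, scale=0.05, size=(30, 40))

# Fit a GMM to the pixel intensities; one component per intensity mode
# (background tissue vs. hypoechoic lesion).
pixels = image.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# The responsibility of the darker component acts as a per-pixel
# tumor-likelihood map, which can be thresholded into a rough mask.
lesion_component = int(np.argmin(gmm.means_.ravel()))
likelihood_map = gmm.predict_proba(pixels)[:, lesion_component].reshape(image.shape)
rough_mask = likelihood_map > 0.5

print("Estimated lesion fraction:", rough_mask.mean())
```

In the full framework, such a likelihood map would complement the UNet-VGG predictions and help flag images with a higher probability of tumor presence; the sketch only illustrates the intensity-clustering idea, not the federated training loop.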
Keywords