IEEE Access (Jan 2024)
Investigating Catastrophic Forgetting of Deep Learning Models Within Office 31 Dataset
Abstract
Deep learning models have shown impressive performance on a wide range of tasks. However, they are prone to a phenomenon called catastrophic forgetting: when trained on a new task, they lose much of what they learned on previous tasks. In this paper, we study catastrophic forgetting in the context of the Office 31 dataset. We employ five popular deep learning models: EfficientNet, Inception, MobileNet, ResNet, and Vision Transformer. By training and fine-tuning these models on different combinations of domains within the dataset, we analyze their resistance to catastrophic forgetting. Our findings reveal significant variations in performance across models and domains. Vision Transformer shows remarkable resilience to catastrophic forgetting, which may indicate similarities between domains. In contrast, Inception is the most effective model at generalizing to the target domain before being fine-tuned on the target data. Furthermore, we observe anomalies in the Office 31 dataset when it is used as a benchmark for catastrophic forgetting, and we therefore adopt an alternative data usage strategy to evaluate its occurrence. This approach reinforces the findings already obtained with the original dataset.
Keywords