PLoS ONE (Jan 2022)
Saliency guided data augmentation strategy for maximally utilizing an object’s visual information
Abstract
Among the various types of data augmentation strategies, mixup-based approaches have been studied particularly intensively. However, in existing mixup-based approaches, object loss and label mismatching can occur when random patches are used to construct augmented images; moreover, patches that contain no objects may be included, which degrades performance. In this paper, we propose a novel augmentation method that mixes patches in a non-overlapping manner after extracting them from the salient regions of an image. The proposed method makes effective use of object characteristics, because the constructed image consists only of visually important regions and is robust to noise. Since the patches do not occlude each other, the semantically meaningful information in the salient regions can be fully utilized. Additionally, our method is more robust to adversarial attacks than conventional augmentation methods. In our experiments, when Wide ResNet was trained on the public datasets CIFAR-10, CIFAR-100, and STL-10, it achieved top-1 accuracies of 97.26%, 83.99%, and 82.40%, respectively, surpassing other augmentation methods.
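The core idea in the abstract — extract the most salient patch from each source image and compose them without overlap — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`saliency_map`, `most_salient_patch`, `mix_salient`) are hypothetical, and the saliency proxy here (deviation from mean intensity) stands in for whatever saliency detector the method actually uses.

```python
import numpy as np

def saliency_map(img):
    # Hypothetical saliency proxy: per-pixel deviation from the mean
    # grayscale intensity. The real method would use a proper saliency
    # detection model here.
    gray = img.mean(axis=-1)
    return np.abs(gray - gray.mean())

def most_salient_patch(img, ph, pw):
    # Slide a ph x pw window over the saliency map and return the image
    # patch whose total saliency is highest.
    sal = saliency_map(img)
    H, W = sal.shape
    best_yx, best_score = (0, 0), -1.0
    for y in range(H - ph + 1):
        for x in range(W - pw + 1):
            score = sal[y:y + ph, x:x + pw].sum()
            if score > best_score:
                best_score, best_yx = score, (y, x)
    y, x = best_yx
    return img[y:y + ph, x:x + pw]

def mix_salient(img_a, img_b):
    # Place each image's most salient patch in its own half of the output,
    # so the two patches never occlude each other.
    H, W, _ = img_a.shape
    out = np.empty_like(img_a)
    out[:, : W // 2] = most_salient_patch(img_a, H, W // 2)
    out[:, W // 2 :] = most_salient_patch(img_b, H, W - W // 2)
    return out
```

In mixup-style training the label for the composed image would typically be mixed in proportion to the area each patch occupies (a 50/50 split in this sketch); the exact labeling rule is a detail of the full method, not shown here.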