Remote Sensing (Apr 2018)
Color-Boosted Saliency-Guided Rotation Invariant Bag of Visual Words Representation with Parameter Transfer for Cross-Domain Scene-Level Classification
Abstract
Scene classification of remote sensing imagery is usually based on supervised learning, but collecting labelled data in remote sensing domains is expensive and time-consuming. The Bag of Visual Words (BOVW) model has achieved great success in scene classification, but it faces several problems in domain adaptation tasks, such as the influence of background clutter and rotation transformations on the BOVW representation, and the transfer of SVM parameters from the source domain to the target domain, all of which may degrade cross-domain scene classification performance. To address these three problems, a color-boosted saliency-guided rotation invariant bag of visual words representation with parameter transfer is proposed for cross-domain scene classification. A global contrast-based salient region detection method is combined with a color-boosting method to increase the accuracy of the detected salient regions and reduce the effect of background information on the BOVW representation. A rotation invariant BOVW representation is also proposed, obtained by sorting the BOVW representation within each patch, in order to reduce the effect of rotation transformations. Finally, the several best-performing SVM configurations found in the source domain are applied to the target domain so as to reduce the distribution bias between scenes in the source and target domains; among these candidates, the configuration delivering the top classification performance provides the optimal parameters for the target domain. Experimental results on two benchmark datasets confirm that the proposed method outperforms most previous methods in scene classification when labelled instances in the target domain are limited. They also show that the color-boosted global contrast-based salient region detection (CBGCSRD) method, the rotation invariant BOVW representation, and the transfer of SVM parameters from the source to the target domain are all effective, improving classification accuracy by 2.5%, 3.3%, and 3.1%, respectively, and by about 7.5% in total when combined.
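To make the first contribution concrete, the sketch below illustrates a simplified histogram-contrast form of global contrast-based salient region detection. It is a minimal illustration only: the abstract does not specify the color-boosting step or the color metric, so a 1-D quantized color index is used here as a stand-in for a proper color-space distance, and the function name and signature are assumptions.

```python
import numpy as np

def global_contrast_saliency(labels, n_bins):
    """Simplified histogram-contrast saliency: each quantized color's
    saliency is its frequency-weighted distance to all other colors.

    labels: 2-D array of quantized color indices in [0, n_bins).
    """
    flat = labels.ravel()
    freq = np.bincount(flat, minlength=n_bins) / flat.size
    idx = np.arange(n_bins, dtype=float)
    # 1-D index distance stands in for a real color-space metric.
    dist = np.abs(idx[:, None] - idx[None, :])
    sal_per_bin = dist @ freq            # contrast to the rest of the image
    sal = sal_per_bin[flat].reshape(labels.shape)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)
```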
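The rotation invariant representation can be read literally as sorting the BOVW histogram values within each patch, so that the descriptor no longer depends on which visual word fires where under rotation. The sketch below follows that literal reading with average pooling across patches; the paper's exact sorting and pooling scheme is not given in the abstract, so this is an assumed formulation.

```python
import numpy as np

def rotation_invariant_bovw(patch_histograms):
    """Build a rotation invariant image descriptor from per-patch BOVW
    histograms by sorting each patch's bin values (descending), then
    average-pooling the sorted vectors into one image-level vector.

    patch_histograms: (n_patches, n_visual_words) array.
    """
    h = np.asarray(patch_histograms, dtype=float)
    h_sorted = np.sort(h, axis=1)[:, ::-1]   # order-free within each patch
    descriptor = h_sorted.mean(axis=0)       # pool over patches
    return descriptor / max(descriptor.sum(), 1e-12)  # L1 normalization
```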
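For the third contribution, one plausible reading is: rank SVM hyperparameter configurations by cross-validated accuracy on the source domain, keep the several best, and select among only those using the few labelled target samples. The sketch below assumes an RBF-kernel SVM and an illustrative search grid; neither is stated in the abstract.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def transfer_svm_params(X_src, y_src, X_tgt, y_tgt, k=3):
    """Keep the k best SVM configurations from source-domain cross-
    validation, then pick whichever performs best on the small
    labelled target set."""
    # Illustrative grid; the paper's actual search space is not given.
    grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X_src, y_src)

    # Top-k configurations by source-domain cross-validation score.
    order = np.argsort(search.cv_results_["mean_test_score"])[::-1][:k]
    candidates = [search.cv_results_["params"][i] for i in order]

    # Re-evaluate only these k candidates on the target domain.
    scores = [cross_val_score(SVC(kernel="rbf", **p), X_tgt, y_tgt,
                              cv=3).mean() for p in candidates]
    best = candidates[int(np.argmax(scores))]
    return SVC(kernel="rbf", **best).fit(X_tgt, y_tgt)
```

Restricting the target-domain search to the top-k source configurations keeps the selection feasible when labelled target instances are too scarce for a full grid search.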
Keywords