IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2020)
Retrieval Topic Recurrent Memory Network for Remote Sensing Image Captioning
Abstract
Remote sensing image (RSI) captioning aims to generate sentences that describe the content of RSIs. In caption datasets, each RSI is typically described by five sentences. Because annotators attend to different parts of an image, each sentence may cover only part of its content, and an individual sentence can be ambiguous when compared with the other four. Previous methods treat the five sentences separately and may therefore generate ambiguous sentences. To consider the five sentences jointly, a collection of words named topic words, which carry the information common to the five sentences, is incorporated into the captioning model so that it generates a determinate sentence covering the common content of the RSI. Instead of a plain recurrent neural network, a memory network, in which the topic words can be naturally included as memory cells, is introduced to generate sentences. A novel retrieval topic recurrent memory network is proposed to exploit the topic words. First, a topic repository is built to record the topic words of the training set. Then, a retrieval strategy is used to obtain the topic words of a test image from the topic repository. Finally, the retrieved topic words are fed into a recurrent memory network to guide sentence generation. Besides being obtained by retrieval, the topic words of a test image can also be edited manually, which sheds light on the controllability of caption generation. Experiments on two caption datasets are conducted to evaluate the proposed method.
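The three-step pipeline summarized in the abstract (build a topic repository, retrieve topic words for a test image, and feed them as memory cells to a recurrent decoder) can be sketched roughly as follows. This is a minimal, illustrative PyTorch sketch, not the authors' implementation; all class and parameter names (TopicRepository, RecurrentMemoryDecoder, embed_dim, hidden_dim, and the nearest-neighbour retrieval rule) are assumptions made for the example.

```python
# Hypothetical sketch of the retrieval-topic captioning pipeline; names and details
# are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopicRepository:
    """Stores (image feature, topic-word ids) pairs collected from the training set."""

    def __init__(self, features: torch.Tensor, topic_ids: torch.Tensor):
        self.features = F.normalize(features, dim=-1)   # (N, D) training image features
        self.topic_ids = topic_ids                      # (N, K) topic-word ids per image

    def retrieve(self, query: torch.Tensor) -> torch.Tensor:
        """Return topic-word ids of the most similar training image (cosine similarity)."""
        query = F.normalize(query, dim=-1)              # (B, D) test image features
        sims = query @ self.features.t()                # (B, N) similarity scores
        nearest = sims.argmax(dim=-1)                   # (B,) index of nearest neighbour
        return self.topic_ids[nearest]                  # (B, K) retrieved topic words


class RecurrentMemoryDecoder(nn.Module):
    """GRU decoder that attends over topic-word embeddings held as memory cells."""

    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRUCell(2 * embed_dim, hidden_dim)  # input: word + memory context
        self.attn = nn.Linear(hidden_dim, embed_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, topic_ids, captions, init_state):
        memory = self.embed(topic_ids)                  # (B, K, E) topic-word memory cells
        h = init_state                                  # (B, H) state projected from image
        logits = []
        for t in range(captions.size(1)):
            w = self.embed(captions[:, t])              # (B, E) current ground-truth word
            # Attention over the topic memory guides the next word.
            scores = torch.bmm(memory, self.attn(h).unsqueeze(-1)).squeeze(-1)     # (B, K)
            ctx = torch.bmm(F.softmax(scores, dim=-1).unsqueeze(1), memory).squeeze(1)
            h = self.gru(torch.cat([w, ctx], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)               # (B, T, vocab) word scores
```

Manual control of the generated caption, as mentioned in the abstract, would correspond to replacing the `topic_ids` returned by `retrieve` with user-chosen word ids before decoding.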
Keywords