Jisuanji Kexue (Computer Science) (Sep 2021)

Research on Urban Function Recognition Based on Multi-modal and Multi-level Data Fusion Method

  • ZHOU Xin-min, HU Yi-gui, LIU Wen-jie, SUN Rong-jun

DOI
https://doi.org/10.11896/jsjkx.210500220
Journal volume & issue
Vol. 48, no. 9
pp. 50 – 58

Abstract


The division and identification of urban functional areas is of great significance for analyzing the distribution of urban functional areas and understanding the internal spatial structure of cities. This has stimulated demand for multi-source geospatial data fusion, especially the fusion of urban remote sensing data and social sensing data. However, how to effectively fuse urban remote sensing and social sensing data remains a technical challenge. To achieve this fusion and improve the accuracy of urban function recognition, this paper takes remote sensing images and social sensing data as examples, introduces a multi-modal data fusion mechanism, and proposes a model combining deep learning and ensemble learning to infer urban regional functions. The model uses DenseNet and DPN networks to extract urban remote sensing image features and social sensing features from multi-source geospatial data, and performs multi-level data fusion (feature fusion, decision fusion, and hybrid fusion) to identify urban functions. The proposed model is verified on the URFC dataset; with hybrid fusion, the overall classification accuracy, Kappa coefficient, and average F1 reach 74.29%, 0.67, and 71.92%, respectively. Compared with the best classification method using single-modality data, these three evaluation indexes of the proposed fusion model increase by 18.83%, 0.24, and 35.46%, respectively. The experimental results show that the data fusion model has better classification performance, can effectively fuse remote sensing image data and social sensing data, and achieves accurate identification of urban regional functions.
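To make the two fusion levels named in the abstract concrete, the sketch below contrasts feature-level fusion (concatenating the modality embeddings before a shared classifier) with decision-level fusion (averaging the per-class probabilities produced by two modality-specific classifiers). This is a minimal illustration, not the paper's implementation: the function names, vector sizes, and probability values are all hypothetical, and the real model uses DenseNet and DPN branches rather than toy vectors.

```python
def feature_fusion(remote_feat, social_feat):
    """Feature-level fusion: concatenate the two modality feature
    vectors into one joint representation (fed to a shared classifier)."""
    return remote_feat + social_feat  # list concatenation

def decision_fusion(remote_probs, social_probs):
    """Decision-level fusion: average the per-class probabilities
    emitted by the two modality-specific classifier branches."""
    return [(r + s) / 2 for r, s in zip(remote_probs, social_probs)]

# Toy 3-class example for one urban region (values are illustrative):
remote_probs = [0.7, 0.2, 0.1]   # e.g. imagery branch output
social_probs = [0.4, 0.5, 0.1]   # e.g. social-sensing branch output

fused = decision_fusion(remote_probs, social_probs)
predicted_class = fused.index(max(fused))
```

A hybrid scheme, as the abstract describes, would combine both levels, e.g. by also classifying the concatenated features and merging that decision with the branch-wise average.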
