IET Computer Vision (Apr 2021)

Entropy information‐based heterogeneous deep selective fused features using deep convolutional neural network for sketch recognition

  • Shaukat Hayat,
  • She Kun,
  • Sara Shahzad,
  • Parinya Suwansrikham,
  • Muhammad Mateen,
  • Yao Yu

DOI
https://doi.org/10.1049/cvi2.12019
Journal volume & issue
Vol. 15, no. 3
pp. 165 – 180

Abstract

An effective feature representation can boost recognition tasks in the sketch domain. Because a sketch is more abstract and structurally diverse than a natural image, generating a discriminative feature representation for sketch recognition is challenging. Accordingly, this article presents a novel scheme for sketch recognition that generates a discriminative feature representation by integrating asymmetric essential information from deep features. This information is kept in the original feature‐vector space for making the final decision. Specifically, five well‐known pre‐trained deep convolutional neural networks (DCNNs), namely AlexNet, VGGNet‐19, Inception V3, Xception, and InceptionResNetV2, are fine‐tuned and used for feature extraction. First, the high‐level deep layers of the networks are used to obtain a multi‐feature hierarchy from sketch images. Second, an entropy‐based neighbourhood component analysis is employed to rank and fuse features from multiple layers of the different deep networks. Finally, the ranked feature‐vector space is fed into a support vector machine (SVM) classifier to produce the sketch classification. The performance of the proposed scheme is evaluated on two sketch datasets, TU‐Berlin and Sketchy, for classification and retrieval tasks. Experimental results demonstrate that the proposed scheme brings substantial improvement over human recognition accuracy and other state‐of‐the‐art algorithms.
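The pipeline described in the abstract (deep features → entropy‐based ranking → SVM) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature matrix is synthetic random data standing in for fused DCNN features, the entropy score is a plain histogram‐based Shannon entropy per feature dimension (the paper's entropy‐based neighbourhood component analysis is more involved), and all shapes and the cut‐off `k` are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for deep features fused from several CNN layers
# (hypothetical shape: 200 sketches, 64 feature dimensions).
X = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)  # 5 illustrative sketch classes

def entropy_scores(X, bins=10):
    """Score each feature column by the Shannon entropy of its value histogram."""
    scores = []
    for col in X.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins before taking the log
        scores.append(-(p * np.log2(p)).sum())
    return np.array(scores)

# Keep the top-k feature dimensions by entropy, then classify with an SVM.
k = 16
top = np.argsort(entropy_scores(X))[::-1][:k]
clf = SVC(kernel="linear").fit(X[:, top], y)
print(X[:, top].shape)
```

In the paper's setting, `X` would instead hold activations extracted from the fine‐tuned high‐level layers of the five networks, and the ranking would come from the proposed entropy‐based neighbourhood component analysis rather than this simple per‐dimension score.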