Complexity (Jan 2021)

Two-Way Feature Extraction Using Sequential and Multimodal Approach for Hateful Meme Classification

  • Apeksha Aggarwal,
  • Vibhav Sharma,
  • Anshul Trivedi,
  • Mayank Yadav,
  • Chirag Agrawal,
  • Dilbag Singh,
  • Vipul Mishra,
  • Hassène Gritli

DOI
https://doi.org/10.1155/2021/5510253
Journal volume & issue
Vol. 2021

Abstract


Millions of memes are created and shared every day on social media platforms. Memes are a powerful tool for spreading humour; however, some people use them to target an individual or a group, producing offensive content under a polite or sarcastic veneer. A lack of moderation of such memes spreads hatred and can lead to depression-like psychological conditions. Many successful studies have addressed the analysis of language, such as sentiment analysis, and the analysis of images, such as image classification, but most rely on only one of these components. Because meme classification cannot be solved by either aspect alone, the present work identifies, addresses, and ensembles both aspects for analysing such data. In this research, we propose a solution for problems in which the classification depends on more than one model. This paper proposes two different approaches to identifying hateful memes. The first approach applies sentiment analysis to an automatically generated image caption and the text written on the meme. The second approach combines features from the different modalities. Both approaches utilize a combination of GloVe embeddings, an encoder-decoder architecture, and OCR, with deep learning models trained using the Adamax optimizer. The Facebook Hateful Memes Challenge dataset, which contains approximately 8500 meme images, is utilized. Both approaches were submitted to Facebook's live challenge competition and produced acceptable results, and both were tested on the validation dataset, where the results for both models are promising.
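
The following is a minimal sketch of the second (multimodal fusion) approach described in the abstract: a feature vector from the meme image and a GloVe-based vector from the OCR-extracted text are combined and classified by a network trained with the Adamax optimizer. The layer sizes, feature dimensions, and fusion-by-concatenation design are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of multimodal feature fusion for hateful meme classification.
# Assumes precomputed 300-d GloVe text vectors (from OCR'd meme text) and
# 2048-d image features (e.g., from a pretrained CNN encoder); these choices
# are assumptions for illustration.
import torch
import torch.nn as nn

class MultimodalHateMemeClassifier(nn.Module):
    def __init__(self, text_dim=300, image_dim=2048, hidden_dim=256):
        super().__init__()
        # Text branch: projects the GloVe sentence vector of the OCR text.
        self.text_fc = nn.Linear(text_dim, hidden_dim)
        # Image branch: projects the CNN image feature vector.
        self.image_fc = nn.Linear(image_dim, hidden_dim)
        # Fusion head: concatenated modality features -> hateful/not-hateful logit.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, text_feat, image_feat):
        fused = torch.cat([self.text_fc(text_feat), self.image_fc(image_feat)], dim=1)
        return self.classifier(fused)

model = MultimodalHateMemeClassifier()
optimizer = torch.optim.Adamax(model.parameters(), lr=2e-3)  # Adamax, as in the abstract
criterion = nn.BCEWithLogitsLoss()

# Dummy batch standing in for precomputed text and image features.
text_feat = torch.randn(8, 300)
image_feat = torch.randn(8, 2048)
labels = torch.randint(0, 2, (8, 1)).float()

logits = model(text_feat, image_feat)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Concatenation is only one plausible fusion strategy; the first approach in the paper would instead feed the generated caption and OCR text into a sentiment-style text classifier.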