Mathematics (Dec 2022)

A Comprehensive Analysis of Transformer-Deep Neural Network Models in Twitter Disaster Detection

  • Vimala Balakrishnan,
  • Zhongliang Shi,
  • Chuan Liang Law,
  • Regine Lim,
  • Lee Leng Teh,
  • Yue Fan,
  • Jeyarani Periasamy

DOI
https://doi.org/10.3390/math10244664
Journal volume & issue
Vol. 10, no. 24
p. 4664

Abstract

Social media platforms such as Twitter are a vital source of information during major events such as natural disasters. Studies attempting to automatically detect disaster-related textual communications have mostly focused on machine learning and deep learning algorithms. Recent evidence shows that disaster detection models improve with the use of contextual word embedding techniques (i.e., transformers), which take the context of a word into consideration, unlike traditional context-free techniques; however, studies on such models remain scant. To this end, this paper investigates a selection of ensemble learning models that merge transformers with deep neural network algorithms, assessing their performance in detecting informative and non-informative disaster-related Twitter communications. A total of 7613 tweets were used to train and test the models. Results indicate that the ensemble models consistently yield good performance, with F-score values ranging between 76% and 80%. Simpler transformer variants, such as ELECTRA and Talking-Heads Attention, yielded results comparable or superior to those of the computationally expensive BERT, with F-scores ranging from 80% to 84%, especially when merged with Bi-LSTM. Our findings show that the newer and simpler transformers can be used effectively, at lower computational cost, to detect disaster-related Twitter communications.
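
As a concrete illustration of the transformer-plus-deep-neural-network pairing the abstract describes, the sketch below wires an ELECTRA encoder into a Bi-LSTM head for binary tweet classification. It is a minimal, hypothetical reconstruction, not the authors' implementation: the encoder checkpoint (google/electra-small-discriminator), the layer sizes, and the sample tweet are illustrative assumptions, as the abstract does not specify them.

# Minimal sketch of a transformer + Bi-LSTM binary tweet classifier.
# All model names, sizes, and the sample tweet below are illustrative
# assumptions; they are not taken from the paper.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ElectraBiLSTMClassifier(nn.Module):
    def __init__(self, encoder_name="google/electra-small-discriminator",
                 lstm_hidden=128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # The Bi-LSTM reads the encoder's per-token contextual embeddings
        # in both directions.
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Single logit for the informative / non-informative decision.
        self.classifier = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, input_ids, attention_mask):
        token_states = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.bilstm(token_states)
        # Concatenate the final forward and backward hidden states
        # into one sentence vector.
        sentence_vec = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.classifier(sentence_vec).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraBiLSTMClassifier()
batch = tokenizer(["Forest fire near La Ronge Sask. Canada"],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
probs = torch.sigmoid(logits)  # probability the tweet is disaster-related

The design choice mirrors the ensembling idea in the abstract: the transformer supplies contextual token embeddings, while the Bi-LSTM's final hidden states summarize the whole tweet before the linear layer makes the binary decision.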

Keywords