Journal of Big Data (Feb 2022)

Task-agnostic representation learning of multimodal Twitter data for downstream applications

  • Ryan Rivas,
  • Sudipta Paul,
  • Vagelis Hristidis,
  • Evangelos E. Papalexakis,
  • Amit K. Roy-Chowdhury

DOI
https://doi.org/10.1186/s40537-022-00570-x
Journal volume & issue
Vol. 9, no. 1
pp. 1–19

Abstract

Twitter is a frequent target for machine learning research and applications. Many problems, such as sentiment analysis, image tagging, and location prediction, have been studied on Twitter data. Much of the prior work that addresses these problems within the context of Twitter focuses on a subset of the available data types, e.g., only text, or text and image. However, a tweet has several additional components, such as its location and author, that can also provide useful information for machine learning tasks. In this work, we explore the problem of jointly modeling several tweet components in a common embedding space via task-agnostic representation learning, which can then be used to tackle various machine learning applications. To address this problem, we propose a deep neural network framework that combines text, image, and graph representations to learn joint embeddings for five tweet components: body, hashtags, images, user, and location. In our experiments, we use a large dataset of tweets to learn a joint embedding model and apply it to multiple tasks to evaluate its performance against state-of-the-art baselines specific to each task. Our results show that our proposed generic method performs similarly to or better than specialized application-specific approaches, including accuracy of 52.43% vs. 48.88% for location prediction and recall of up to 15.93% vs. 12.12% for hashtag recommendation.
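The abstract's central idea is that heterogeneous tweet components become directly comparable once they are projected into one shared embedding space. The sketch below illustrates that idea only; it is a minimal PyTorch example, not the authors' implementation, and the class name JointTweetEmbedder, the per-component linear projection heads, and the feature dimensions are all illustrative assumptions.

    # Minimal sketch (not the paper's architecture): project precomputed
    # features for each tweet component into one shared embedding space.
    import torch
    import torch.nn as nn

    class JointTweetEmbedder(nn.Module):
        def __init__(self, text_dim=768, image_dim=2048, graph_dim=128, joint_dim=256):
            super().__init__()
            # One projection head per tweet component; inputs are assumed to be
            # precomputed features (e.g., text-encoder output for body/hashtags,
            # CNN image features, graph embeddings for user and location).
            self.heads = nn.ModuleDict({
                "body":     nn.Linear(text_dim, joint_dim),
                "hashtags": nn.Linear(text_dim, joint_dim),
                "image":    nn.Linear(image_dim, joint_dim),
                "user":     nn.Linear(graph_dim, joint_dim),
                "location": nn.Linear(graph_dim, joint_dim),
            })

        def forward(self, features: dict) -> dict:
            # Map each available component into the common space and L2-normalize,
            # so components of the same tweet can be compared via dot products.
            return {
                name: nn.functional.normalize(self.heads[name](feat), dim=-1)
                for name, feat in features.items()
            }

    # Toy usage: embed the text body and image of a single tweet and score
    # their cross-modal similarity in the joint space.
    model = JointTweetEmbedder()
    out = model({
        "body":  torch.randn(1, 768),   # placeholder text features
        "image": torch.randn(1, 2048),  # placeholder image features
    })
    similarity = (out["body"] * out["image"]).sum(dim=-1)

Once every component lives in the same normalized space, a single model can serve multiple downstream tasks (e.g., ranking candidate hashtags or locations by similarity to a tweet's embedding), which is what makes the representation task-agnostic.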

Keywords