Visual Informatics (Dec 2018)

Versus—A tool for evaluating visualizations and image quality using a 2AFC methodology

  • Jenny Vuong,
  • Sandeep Kaur,
  • Julian Heinrich,
  • Bosco K. Ho,
  • Christopher J. Hammang,
  • Benedetta F. Baldi,
  • Seán I. O’Donoghue

Journal volume & issue
Vol. 2, no. 4
pp. 225–234

Abstract

Novel visualization methods and strategies are necessary to cope with the deluge of datasets in every scientific field, to make discoveries and answer previously unanswered questions. These methods and strategies should not only present scientific findings concisely as images but also need to be effective and expressive, qualities that often remain untested. Here, we present Versus, a tool that enables easy image quality assessment and image ranking using a two-alternative forced choice (2AFC) methodology and an efficient ranking algorithm based on binary search. The tool provides a systematic way of setting up evaluation experiments via the web, without the need to install additional software or have any programming skills. Furthermore, Versus can easily interface with crowdsourcing platforms such as Amazon's Mechanical Turk, or can be used as a stand-alone system to carry out evaluations with experts. We demonstrate the use of Versus by means of an image evaluation study aiming to determine whether hue, saturation, brightness, and texture are good indicators of uncertainty in three-dimensional protein structures. Drawing on the power of crowdsourcing, we argue that there is demand, and also great potential, for this tool to become a standard for simple and fast image evaluations that test the effectiveness and expressiveness of scientific visualizations.

Keywords: Evaluation, Visualization, Visual analytics, Image comparison, Crowdsourcing, Evaluation methods, 2AFC, Image evaluation, Tool, Visualization evaluation
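The abstract does not detail the ranking algorithm, but the general idea of ranking images by binary-search insertion driven by 2AFC judgments can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the `prefer` oracle stands in for a participant's forced choice between two images, and all names are hypothetical.

```python
def rank_images(images, prefer):
    """Build a best-first ranking by binary-search insertion.

    `prefer(a, b)` models one 2AFC trial: True if image `a` is
    judged better than image `b`. Each insertion needs O(log n)
    trials, so ranking n images takes O(n log n) comparisons
    rather than the O(n^2) of exhaustive pairwise testing.
    """
    ranking = []
    for img in images:
        lo, hi = 0, len(ranking)
        while lo < hi:
            mid = (lo + hi) // 2
            if prefer(img, ranking[mid]):
                hi = mid          # judged better: search upper half of ranking
            else:
                lo = mid + 1      # judged worse: search lower half
        ranking.insert(lo, img)
    return ranking


# Usage sketch with a simulated observer whose judgments follow
# hidden quality scores (in a real study, `prefer` would record a
# participant's choice between the two displayed images).
scores = {"img_a": 3, "img_b": 1, "img_c": 2}
ranking = rank_images(list(scores), lambda x, y: scores[x] > scores[y])
print(ranking)  # best-first order
```

In practice each call to `prefer` would be answered by one or more crowdworkers, and repeated trials could be aggregated (e.g. by majority vote) to reduce the noise of individual judgments.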