PeerJ (Apr 2020)

Data-driven classification of the certainty of scholarly assertions

  • Mario Prieto,
  • Helena Deus,
  • Anita de Waard,
  • Erik Schultes,
  • Beatriz García-Jiménez,
  • Mark D. Wilkinson

DOI: https://doi.org/10.7717/peerj.8871
Journal volume & issue: Vol. 8, e8871

Abstract

The grammatical structures scholars use to express their assertions are intended to convey various degrees of certainty or speculation. Prior studies have suggested a variety of categorization systems for scholarly certainty; however, these have not been objectively tested for their validity, particularly with respect to representing the interpretation by the reader, rather than the intention of the author. In this study, we use a series of questionnaires to determine how researchers classify various scholarly assertions, using three distinct certainty classification systems. We find that there are three distinct categories of certainty along a spectrum from high to low. We show that these categories can be detected in an automated manner, using a machine learning model, with a cross-validation accuracy of 89.2% relative to an author-annotated corpus, and 82.2% accuracy against a publicly annotated corpus. This finding provides an opportunity for contextual metadata related to certainty to be captured as part of text-mining pipelines, which currently miss these subtle linguistic cues. We provide an exemplar machine-accessible representation—a Nanopublication—where certainty category is embedded as metadata in a formal, ontology-based manner within text-mined scholarly assertions.
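To make the classification task concrete, the sketch below shows a minimal three-way certainty classifier over scholarly assertions using TF-IDF features and logistic regression. This is purely illustrative: the example sentences, labels, and model choice are invented here and are not the paper's actual corpus, features, or model.

```python
# Hypothetical sketch of a three-category certainty classifier
# (high / medium / low), assuming scikit-learn is available.
# The training sentences and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "X causes Y.",                        # asserted with high certainty
    "X is known to regulate Y.",          # high certainty
    "X may contribute to Y.",             # hedged, medium certainty
    "X appears to be associated with Y.", # medium certainty
    "X might possibly influence Y.",      # speculative, low certainty
    "It is speculated that X affects Y.", # low certainty
]
labels = ["high", "high", "medium", "medium", "low", "low"]

# Word unigrams and bigrams capture hedging cues such as
# "may", "might", and "is speculated".
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(sentences, labels)

# Classify a previously unseen assertion into one of the three categories.
pred = clf.predict(["It is speculated that X might affect Y."])[0]
print(pred)
```

In practice the predicted label could then be attached to the text-mined assertion as ontology-based metadata, as in the Nanopublication representation the abstract describes.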

Keywords