Diagnostics (Oct 2022)

Inter/Intra-Observer Agreement in Video-Capsule Endoscopy: Are We Getting It All Wrong? A Systematic Review and Meta-Analysis

  • Pablo Cortegoso Valdivia,
  • Ulrik Deding,
  • Thomas Bjørsum-Meyer,
  • Gunnar Baatrup,
  • Ignacio Fernández-Urién,
  • Xavier Dray,
  • Pedro Boal-Carvalho,
  • Pierre Ellul,
  • Ervin Toth,
  • Emanuele Rondonotti,
  • Lasse Kaalby,
  • Marco Pennazio,
  • Anastasios Koulaouzidis

DOI
https://doi.org/10.3390/diagnostics12102400
Journal volume & issue
Vol. 12, no. 10
p. 2400

Abstract

Video-capsule endoscopy (VCE) reading is a time- and energy-consuming task. Agreement on findings, whether between different readers (inter-observer) or within the same reader (intra-observer), is crucial for reading performance and for the validity of reports. The aim of this systematic review with meta-analysis is to evaluate inter- and intra-observer agreement in VCE reading. A systematic literature search of PubMed, Embase and Web of Science was performed up to September 2022. The degree of observer agreement, expressed with different test statistics, was extracted. As different statistics are not directly comparable, our analyses were stratified by type of test statistic, dividing results into the groups "None/Poor/Minimal", "Moderate/Weak/Fair", "Good/Excellent/Strong" and "Perfect/Almost perfect" and reporting the proportion of each. In total, 60 studies were included in the analysis, with a total of 579 comparisons. The quality of the included studies, assessed with the MINORS score, was sufficient in 52/60 studies. The most common test statistics were the Kappa statistic for categorical outcomes (424 comparisons) and the intra-class correlation coefficient (ICC) for continuous outcomes (73 comparisons). In the overall comparison of inter-observer agreement, only 23% of comparisons were rated "good" or "perfect"; for intra-observer agreement, this was the case in 36%. Sources of heterogeneity (high, I² 81.8–98.1%) were investigated with meta-regressions, which suggested a possible role of country, capsule type and year of publication in Kappa inter-observer agreement. VCE reading suffers from substantial heterogeneity and sub-optimal agreement in both inter- and intra-observer evaluations. Artificial-intelligence-based tools and the adoption of a unified terminology may progressively enhance levels of agreement in VCE reading.
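As a worked illustration of the agreement statistics pooled in this review, the minimal Python sketch below computes Cohen's kappa for two hypothetical readers labelling the same capsule frames and maps the result onto the qualitative bands named above. The frame labels and the exact numeric cut-offs are illustrative assumptions (loosely Landis & Koch style); the review itself reports each study's own published scale rather than this one.

import numpy as np

def cohen_kappa(reader_a, reader_b):
    # Cohen's kappa for two raters' categorical labels:
    # (observed agreement - chance agreement) / (1 - chance agreement).
    labels = sorted(set(reader_a) | set(reader_b))
    idx = {lab: i for i, lab in enumerate(labels)}
    n = len(reader_a)
    cm = np.zeros((len(labels), len(labels)))
    for a, b in zip(reader_a, reader_b):
        cm[idx[a], idx[b]] += 1
    p_obs = np.trace(cm) / n                        # observed agreement
    p_exp = cm.sum(axis=1) @ cm.sum(axis=0) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def band(kappa):
    # Hypothetical cut-offs for the qualitative bands used in the abstract;
    # the paper pools several published scales, not this exact one.
    if kappa < 0.20: return "None/Poor/Minimal"
    if kappa < 0.60: return "Moderate/Weak/Fair"
    if kappa < 0.80: return "Good/Excellent/Strong"
    return "Perfect/Almost perfect"

# Two readers labelling the same 10 capsule frames (hypothetical data).
a = ["lesion", "normal", "lesion", "normal", "normal",
     "lesion", "lesion", "normal", "lesion", "normal"]
b = ["lesion", "normal", "normal", "normal", "normal",
     "lesion", "lesion", "lesion", "lesion", "normal"]
k = cohen_kappa(a, b)
print(f"kappa = {k:.2f} -> {band(k)}")  # kappa = 0.60 -> Good/Excellent/Strong

Here the two readers agree on 8 of 10 frames (p_obs = 0.8) but would agree on half by chance alone (p_exp = 0.5), giving kappa = 0.6; this gap between raw and chance-corrected agreement is why the review stratifies by test statistic rather than pooling raw percentages.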

Keywords