Inter- and Intra-Observer Agreement When Using a Diagnostic Labeling Scheme for Annotating Findings on Chest X-rays—An Early Step in the Development of a Deep Learning-Based Decision Support System
Dana Li,
Lea Marie Pehrson,
Lea Tøttrup,
Marco Fraccaro,
Rasmus Bonnevie,
Jakob Thrane,
Peter Jagd Sørensen,
Alexander Rykkje,
Tobias Thostrup Andersen,
Henrik Steglich-Arnholm,
Dorte Marianne Rohde Stærk,
Lotte Borgwardt,
Kristoffer Lindskov Hansen,
Sune Darkner,
Jonathan Frederik Carlsen,
Michael Bachmann Nielsen
Affiliations
Dana Li
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Lea Marie Pehrson
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Lea Tøttrup
Unumed Aps, 1055 Copenhagen, Denmark
Marco Fraccaro
Unumed Aps, 1055 Copenhagen, Denmark
Rasmus Bonnevie
Unumed Aps, 1055 Copenhagen, Denmark
Jakob Thrane
Unumed Aps, 1055 Copenhagen, Denmark
Peter Jagd Sørensen
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Alexander Rykkje
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Tobias Thostrup Andersen
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Henrik Steglich-Arnholm
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Dorte Marianne Rohde Stærk
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Lotte Borgwardt
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Kristoffer Lindskov Hansen
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Sune Darkner
Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
Jonathan Frederik Carlsen
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Michael Bachmann Nielsen
Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
Consistent annotation of data is a prerequisite for the successful training and testing of artificial intelligence-based decision support systems in radiology. This can be obtained by standardizing terminology when annotating diagnostic images. The purpose of this study was to evaluate the annotation consistency among radiologists when using a novel diagnostic labeling scheme for chest X-rays. Six radiologists with experience ranging from one to sixteen years annotated a set of 100 fully anonymized chest X-rays. The blinded radiologists annotated on two separate occasions. Statistical analyses were done using Randolph’s kappa and the prevalence-adjusted bias-adjusted kappa (PABAK), and the proportions of specific agreement were calculated. Fair-to-excellent agreement was found for all labels among the annotators (Randolph’s kappa, 0.40–0.99). The PABAK ranged from 0.12 to 1 for the two-rater inter-rater agreement and from 0.26 to 1 for the intra-rater agreement. Descriptive and broad labels achieved the highest proportion of positive agreement in both the inter- and intra-rater analyses. Annotating findings with specific, interpretive labels was found to be difficult for less experienced radiologists. Annotating images with descriptive labels may increase agreement between radiologists with different experience levels compared to annotation with interpretive labels.
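To make the reported statistics concrete, the following is an illustrative sketch (not the study's actual analysis code) of the agreement measures named in the abstract: binary two-rater PABAK, the proportions of positive and negative specific agreement, and Randolph's free-marginal multirater kappa. The function names and example data are hypothetical.

```python
def agreement_metrics(rater_a, rater_b):
    """Two-rater agreement on binary labels (1 = finding present).

    Returns observed agreement, PABAK (for two categories:
    PABAK = 2 * p_o - 1), and the proportions of positive and
    negative specific agreement.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    both_pos = sum(a == 1 and b == 1 for a, b in zip(rater_a, rater_b))
    both_neg = sum(a == 0 and b == 0 for a, b in zip(rater_a, rater_b))
    p_o = (both_pos + both_neg) / n            # observed agreement
    pabak = 2 * p_o - 1                        # prevalence/bias-adjusted kappa
    total_pos = sum(rater_a) + sum(rater_b)    # positive ratings, both raters
    total_neg = 2 * n - total_pos
    ppa = 2 * both_pos / total_pos if total_pos else float("nan")
    pna = 2 * both_neg / total_neg if total_neg else float("nan")
    return p_o, pabak, ppa, pna


def randolph_kappa(ratings, q):
    """Randolph's free-marginal kappa for multiple raters.

    `ratings` is a list of items; each item is the list of category
    labels assigned by the raters. `q` is the number of categories.
    """
    r = len(ratings[0])  # raters per item (assumed constant)
    per_item = []
    for item in ratings:
        counts = {c: item.count(c) for c in set(item)}
        # proportion of agreeing rater pairs for this item
        per_item.append(sum(n * (n - 1) for n in counts.values()) / (r * (r - 1)))
    p_bar = sum(per_item) / len(per_item)
    return (p_bar - 1 / q) / (1 - 1 / q)
```

For example, two raters who agree on 4 of 5 labels yield an observed agreement of 0.8 and a PABAK of 0.6, while unanimous multirater labels give a Randolph's kappa of 1.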