International Review of Research in Open and Distributed Learning (Jul 2005)

Identifying Sources of Difference in Reliability in Content Analysis

  • Elizabeth Murphy,
  • Justyna Ciszewska-Carr

DOI
https://doi.org/10.19173/irrodl.v6i2.233
Journal volume & issue
Vol. 6, no. 2

Abstract

This paper reports on a case study that identifies and illustrates sources of difference in inter-coder agreement, and hence in reliability, in a context of quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long OAD were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to coders, tasks, and students. Reliability values were calculated at the levels of categories, processes, and indicators. At the most detailed level of coding, that of the indicator, findings revealed an overall inter-coder reliability of .591 as measured with Cohen's kappa. At the same level, kappa values for individual tasks ranged from .349 to .664, and values for individual participants ranged from .390 to .907. Implications for training and research are discussed.

Keywords: content analysis; online discussions; reliability; Cohen's kappa; sources of difference; coding
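The abstract reports inter-coder agreement as Cohen's kappa, which corrects observed agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). As an illustrative sketch only (the function name and sample codings below are hypothetical, not the authors' data or tooling), kappa for two coders' label sequences can be computed as:

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of nominal codes."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("sequences must be non-empty and equal in length")
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    # Observed agreement: proportion of units both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal proportions, summed over labels.
    p_e = sum((coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels)
    if p_e == 1.0:  # both coders used a single identical label throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of eight discussion units by two coders.
a = ["PFR", "PFR", "other", "PFR", "other", "other", "PFR", "other"]
b = ["PFR", "other", "other", "PFR", "other", "PFR", "PFR", "other"]
print(round(cohens_kappa(a, b), 3))
```

Because kappa discounts chance agreement, it is lower than raw percent agreement whenever coders disagree at all, which is why it is the more conservative reliability statistic used in the study.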