Journal of University College of Medicine and Dentistry (May 2024)

Expert Prediction Versus Difficulty Index Measured by Psychometric Analysis: A Mixed Method Study Interpreted through the Diagnostic Judgment by Cognitive Modeling Framework

  • Memoona Mansoor,
  • Shazia Imran,
  • Ali Tayyab,
  • Rehmah Sarfraz

DOI: https://doi.org/10.51846/jucmd.v3i2.3047
Journal volume & issue: Vol. 3, no. 2

Abstract


Objective: Item difficulty can be determined in two ways: one relies on expert judgment, the other on psychometric analysis. This study compared item developers' perceptions of item difficulty with psychometric analysis results and explored their thought processes when categorizing items.

Methodology: This explanatory sequential mixed-methods study was conducted from October to December 2022 in three phases (quantitative, qualitative, and mixed-methods strands). Difficulty rankings of items by 20 subject experts for all preclinical-year end-of-module exams were compared with rankings obtained through psychometric analysis of OMR (Optical Mark Reader) data. Cohen's kappa was used to assess agreement, and Pearson's correlation was used to measure the correlation between the two measures (item writers' perception of item difficulty and Rightmark analysis). All 20 item developers were interviewed using an open-ended two-item questionnaire. Interviews were recorded and transcribed, and themes and subthemes were identified from the interview data through manual coding. Participant anonymity was maintained.

Results: A total of 1150 items from Anatomy, Physiology, Biochemistry, Pharmacology, Pathology & Forensic Medicine, developed by 20 content experts, were compared. There was a weak positive (r=0.11) but significant (p=0.00) correlation between faculty perception and Rightmark analysis of item difficulty. However, agreement between the two measurements was negligible (Cohen's kappa k=0.042, p=0.027). Interviews with item developers identified four major themes: academic performance, learning habits, the content targeted, and the item's construction.

Conclusion: Experts consider contextual factors, covering content and student background, when ranking items, whereas psychometric analysis is based on item performance data. These contextual nuances may explain the differences in judgment.
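The two statistics used in the study, Cohen's kappa for agreement between categorical difficulty rankings and Pearson's r for correlation between continuous difficulty scores, can be sketched from first principles. The data below is synthetic and illustrative only, not the study's dataset, and the function names are this sketch's own:

```python
# Illustrative sketch (not the study's actual analysis pipeline):
# comparing expert difficulty rankings with psychometrically measured
# difficulty using Cohen's kappa and Pearson's correlation.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohen_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_chance = sum(
        (rater1.count(lab) / n) * (rater2.count(lab) / n) for lab in labels
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical data: expert difficulty categories vs. categories
# derived from the measured difficulty index.
expert   = ["easy", "hard", "easy", "hard"]
measured = ["easy", "hard", "hard", "hard"]
print(cohen_kappa(expert, measured))   # → 0.5

# Toy continuous difficulty scores that correlate perfectly.
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # ≈ 1.0
```

A kappa near 0 (as the study's k=0.042) means observed agreement barely exceeds what two raters would reach by chance alone, which is how a weak but significant correlation can coexist with negligible categorical agreement.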

Keywords