Indian Journal of Community Medicine (Jan 2022)
Evaluation of multiple-choice questions by item analysis, from an online internal assessment of 6th semester medical students in a rural medical college, West Bengal
Abstract
Background: Properly constructed single-best-answer multiple-choice questions (MCQs), or items, assess the higher-order cognitive processing of Bloom's taxonomy and accurately discriminate between high and low achievers. However, guidelines for writing good test items are rarely followed, leading to the generation and application of faulty MCQs. Materials and Methods: During the lockdown period in 2020, an internal assessment was conducted online using Google Forms. The paper comprised 60 single-response MCQs, each consisting of a single stem and four options: one correct answer and three distractors. Each item was analyzed for difficulty index (Dif I), discrimination index (DI), and distractor efficiency (DE). Results: The mean of the achieved marks was 42.92 (standard deviation [SD] 5.07). The mean Dif I, DI, and DE were 47.95% (SD 16.39), 0.12 (SD 0.10), and 18.42% (SD 15.35), respectively. Of the items, 46.67% were easy and 21.66% showed acceptable discrimination. A very weak negative correlation was found between Dif I and DI. Of the 180 distractors, 51.66% were nonfunctional. Conclusion: Item analysis, together with storage of MCQs alongside their indices, gives the examiner an opportunity to select MCQs of an appropriate difficulty level for the needs of a given assessment and to decide their placement in the question paper.
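The abstract names the three indices but does not state how they were computed. The sketch below illustrates the conventional formulas from the item-analysis literature, assuming the usual high/low scorer split and the common <5% response threshold for flagging a nonfunctional distractor; the group sizes, option counts, and answer key in the example are hypothetical, not data from this study.

```python
def difficulty_index(high_correct: int, low_correct: int, group_size: int) -> float:
    """Dif I (%) = (H + L) / N * 100, where H and L are the numbers of correct
    responses in the high- and low-scoring groups and N is the total number of
    students across both groups."""
    return (high_correct + low_correct) / (2 * group_size) * 100

def discrimination_index(high_correct: int, low_correct: int, group_size: int) -> float:
    """DI = (H - L) / group size; ranges from -1 to +1, with higher values
    indicating better separation of high and low achievers."""
    return (high_correct - low_correct) / group_size

def distractor_efficiency(option_counts: dict[str, int], key: str, n_students: int) -> float:
    """DE (%) based on the number of nonfunctional distractors (NFDs), i.e.
    distractors chosen by fewer than 5% of examinees:
    0 NFDs -> 100%, 1 -> 66.6%, 2 -> 33.3%, 3 -> 0%."""
    distractors = [opt for opt in option_counts if opt != key]
    nfds = sum(1 for opt in distractors if option_counts[opt] / n_students < 0.05)
    return {0: 100.0, 1: 66.6, 2: 33.3, 3: 0.0}[nfds]

# Hypothetical item: 30 students in each of the high and low groups,
# 100 examinees overall, option "B" keyed as correct.
print(difficulty_index(high_correct=24, low_correct=12, group_size=30))      # 60.0 -> acceptable
print(discrimination_index(high_correct=24, low_correct=12, group_size=30))  # 0.4 -> good
print(distractor_efficiency({"A": 20, "B": 55, "C": 22, "D": 3},
                            key="B", n_students=100))                        # 66.6 (one NFD)
```

Under these conventions, an item with a Dif I between roughly 30% and 70% and a DI of 0.2 or above would typically be retained, which is consistent with the abstract's point that indexed storage lets an examiner pick items of a suitable difficulty level.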
Keywords