Cogent Education (Jan 2017)
Using reliability and item analysis to evaluate a teacher-developed test in educational measurement and evaluation
Abstract
Item analysis is essential for improving items that will be reused in later tests; it can also be used to identify and eliminate misleading items. This study focused on item and test quality and explored the relationship of the difficulty index (p-value) and discrimination index (DI) with distractor efficiency (DE). The study was conducted among 247 first-year students pursuing a Diploma in Education at Cape Coast Polytechnic. Fifty multiple-choice questions were administered as an end-of-semester examination in an Educational Measurement course. The internal consistency reliability of the test, estimated with the Kuder–Richardson 20 coefficient (KR-20), was 0.77. The mean score was 29.23 with a standard deviation of 6.36. The mean difficulty index (p-value) was 58.46% (SD 21.23%) and the mean DI was 0.22 (SD 0.17). DI peaked for items with p-values between 40 and 60%. Mean DE was 55.04% (SD 24.09%). Items of average difficulty and high discriminating power with functioning distractors should be carried into future tests to improve the quality of the assessment. Based on DI, 30 of the test items (60%) fell within the reasonably good or acceptable ranges.
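For reference, the indices reported above are assumed to follow their conventional classical-test-theory definitions; a compact summary of the standard formulations (not equations reproduced from the article itself) is

\[
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right), \qquad p_i = \frac{R_i}{N}, \qquad \mathrm{DI}_i = \frac{H_i - L_i}{n},
\]

where \(k\) is the number of items, \(p_i\) and \(q_i = 1 - p_i\) are the proportions of examinees answering item \(i\) correctly and incorrectly, \(\sigma_X^2\) is the variance of total test scores, \(R_i\) is the number of correct responses to item \(i\) among the \(N\) examinees, and \(H_i\) and \(L_i\) are the numbers of correct responses to item \(i\) in the upper and lower scoring groups of size \(n\) each.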
Keywords