Background: Multiple choice questions (MCQs) are used as an objective and reliable tool of assessment. Item analysis is a data analytic process that examines students' responses to an item (a single MCQ). We observed that it is seldom done routinely; instead, faculty rely on their perception of the difficulty level of MCQs. Very few studies correlate faculty perception of the difficulty level of MCQs with the objective difficulty index (DIF I) obtained by item analysis.
Aim and Objectives: The objectives of the study are as follows: (1) To calculate the difficulty index (DIF I), discrimination index (d value), and distractor efficiency (DE) of MCQs. (2) To correlate faculty perceptions of the difficulty level of MCQs with the DIF I of the MCQs. (3) To determine the knowledge, attitude, and practice (KAP) of routine item analysis in institutes and the reasons for the same.
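For reference, the indices in objective (1) are conventionally defined as follows (standard item analysis formulas; the abstract does not restate them, so this notation is assumed rather than taken from the study):

DIF\,I = \frac{H + L}{N} \times 100, \qquad d = \frac{H - L}{n},

where H and L are the numbers of correct responses in the high- and low-scoring groups, N is the total number of students in both groups, and n is the number of students in each group. A distractor is usually considered non-functional if chosen by fewer than 5% of examinees, and DE is the percentage of functional distractors per item (100% when all distractors are functional).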
Materials and Methods: Sixty-four single best response type MCQs answered by 120 students were used for analysis. Post-validation of the paper was done by item analysis. The MCQs were sent to faculty members by email, who rated their perceived difficulty level on a three-point scale. A pre-validated questionnaire on the KAP of item analysis was also sent to 71 faculty members in 23 institutes.
Results: Data were analyzed statistically. Spearman's correlation between the median perceived difficulty level by faculty and DIF I was not significant (r² = 0.0028). DIF I (mean ± SD) was 58.24 ± 19.85 and the d value (mean ± SD) was 0.25 ± 0.14. DE was 100% for 31 items.
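As an illustration of how such a correlation can be computed (a minimal sketch with hypothetical rating and DIF I values, not the study's data), the analysis reduces to a rank correlation across items:

# Illustrative sketch: rank correlation of faculty-perceived difficulty
# (three-point scale) with the objective difficulty index (DIF I).
# Values below are hypothetical placeholders, not the study's 64-item data.
import numpy as np
from scipy.stats import spearmanr

median_perceived = np.array([2, 1, 3, 2, 2, 1, 3, 2])              # median faculty rating per MCQ
dif_i = np.array([62.5, 80.0, 35.8, 55.0, 70.0, 88.3, 28.3, 60.8])  # DIF I (% correct) per MCQ

rho, p_value = spearmanr(median_perceived, dif_i)
print(f"Spearman rho = {rho:.3f}, r^2 = {rho**2:.4f}, p = {p_value:.3f}")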
Conclusion: Although many faculty are aware of the process of item analysis, it is rarely used. Faculty rely instead on the perceived difficulty level; however, its correlation with the objective difficulty index is not statistically significant. Item analysis is a must for improving the quality of the question bank and for standardization of assessment.
Key words: Item Analysis; Difficulty Index; Discrimination Index; Distractor Efficiency