Factors Affecting Difficulty & Discrimination Indices
Teachers are often stunned after seeing the results of an assessment, wondering how students could have performed so poorly in an examination.
Making fair and systematic evaluations of students’ performance can be a challenging task. Teachers, employers, and others in evaluative positions use a variety of tools to assist them in their evaluations. Tests are tools that are frequently used to facilitate the evaluation process.
One-best MCQs are one of the assessment strategies that can quickly assess any level of cognition. Analysis of these MCQs should be done regularly to evaluate the quality of items and of the test as a whole. Such analyses can also be employed to revise and improve both items and the test.
Item/MCQ analysis is a valuable, yet relatively simple, procedure performed after the examination that provides information regarding the reliability and validity of a test item. It also tells how difficult or easy the questions were, e.g. through the difficulty index.
This is an index which expresses the proportion or percentage of students who answered the item/MCQ correctly. It is frequently called the p-value.
The larger the percentage of students getting an item right, the easier the item & the higher the difficulty index. This is perhaps why the “difficulty index” should have been named the “easiness index”.
The item/MCQ difficulty index is one of the most useful, and most frequently reported, item analysis statistics.
Item difficulty can range from 0.0 (none of the students answered the item correctly) to 1.0 (all of the students answered the item correctly). Experts recommend that the average level of difficulty for an MCQ item should be between 30% and 70%.
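As a minimal illustration (not from the original article; the function name and data are hypothetical), the p-value described above can be computed by dividing the number of correct responses by the number of students:

```python
def difficulty_index(responses):
    """Proportion of students who answered the item correctly (the p-value)."""
    return sum(responses) / len(responses)

# Illustrative data: 1 = correct, 0 = incorrect, for 10 students on one MCQ
item_responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
p = difficulty_index(item_responses)  # 7 of 10 correct -> 0.7
```

With 7 of 10 students answering correctly, p = 0.7, which falls within the recommended 30%–70% band.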
Factors Causing a Low Difficulty Index (A low p-value)
A difficulty index of less than 20-25% means there is obviously something wrong. Following is a list of factors that may cause a low difficulty index (also called p-value) of an MCQ (less than 0.2-0.3, i.e. fewer than 20-30% of students got the MCQ right):
1. Learning objective of MCQ is out of course/not in prescribed/recommended textbooks
2. Content/learning objective was not taught in class
3. Question was really difficult, beyond the scope of understanding of an undergraduate student
4. MCQ was of a higher cognitive level
5. Question and/or its wording is not clear enough; ambiguous or wrongly phrased question
6. Distracters not clear enough; there may be more than one ‘most likely’ answer
7. Answer in examiner’s key is wrong or there are two correct answers; these are often picked up when even toppers get the MCQ item wrong
8. Students missed the concerned lecture (physically absent), did not take relevant notes, were mentally absent in the lecture, etc.
9. Students did not study from recommended textbooks, lecture notes, etc. Students often study from short books which do not cover all learning objectives
10. Students in general did not come prepared for the exam
11. Students’ caliber in general is weak (e.g. students of a private medical college in general compared with students of a government medical college)
12. Syllabus too extensive; students were unable to cover/revise the whole prescribed course
13. Students overburdened with more than one test/evaluation in one day
14. Carefree, casual attitude of students, especially those of wealthy parents and/or those who were not even interested in the course and were forced/coerced/requested by parents to get admission (e.g. in a medical college); also includes female medical students and/or their parents who only wish to add ‘Dr.’ as part of the ‘Jahaiz’
15. Leaving it too late: many students leave it too late before starting to seriously study for the exams, especially professional exams. They ‘enjoy’ all year round and only start serious study in the last 1-2 weeks/months before the final exams
16. Incompetent teacher: the teacher was unable to deliver the learning objectives wholly and properly
17. Inability to use unfair means; no leakage of paper, etc. (unofficial reasons)
These same factors may also be reasons why an SEQ (Short Essay Question) was found to have a low difficulty index, with the following additions:
18. Excessive strictness in paper checking
19. Wrong/negligent paper checking: right answers inadvertently marked wrong. This may happen when an examiner has a huge bundle of papers to check in the limited time given (having been given, or having taken on, this huge responsibility for maximum monetary benefit), or when the examiner is not competent enough to check the papers.
There are other item analyses besides the difficulty index, for example the discrimination index. This index of discrimination is simply the difference between the percentage of high-achieving students who got an item right and the percentage of low-achieving students who got the item right.
In simpler terms, the ability of an item to discriminate between the top and bottom students who took the exam determines the discrimination index.
The discrimination index is obtained by subtracting the number of students in the lower group who got the item correct (worst performance) (lower 25 %) from the number of students in the upper group who got the item correct (upper 25 %). Then divide by the number of students in each group.
The possible range of the discrimination index is -1.0 to 1.0; ideally the value should be 0.2 or higher. However, if an item has discrimination below 0.0, it suggests a problem.
When an item is discriminating negatively, overall the most knowledgeable students are getting the item wrong and the least knowledgeable students are getting the item right.
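The upper-minus-lower calculation described above can be sketched as follows (a hypothetical illustration, not from the article; the function name and sample data are assumptions, and the upper and lower groups are taken as the top and bottom 25% of students by total score):

```python
def discrimination_index(scores, correct):
    """Upper-minus-lower discrimination index for one item.

    scores:  total test scores, one per student
    correct: 1/0 flags for whether each student got THIS item right
    """
    n = len(scores)
    k = max(1, n // 4)  # size of the upper and lower 25% groups
    # Rank students by total score, best first
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    upper = sum(correct[i] for i in order[:k])   # correct answers in upper group
    lower = sum(correct[i] for i in order[-k:])  # correct answers in lower group
    return (upper - lower) / k

# Illustrative data for 8 students (upper/lower groups of 2 students each)
totals  = [95, 90, 85, 80, 60, 55, 50, 45]
item_ok = [ 1,  1,  1,  0,  0,  0,  1,  0]
d = discrimination_index(totals, item_ok)  # (2 - 1) / 2 = 0.5
```

Here 2 of the top 2 students and 1 of the bottom 2 students got the item right, giving d = 0.5, which exceeds the 0.2 threshold mentioned above. Reversing the pattern (top students wrong, bottom students right) would produce a negative value.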
Causes of low/negative discrimination index
1. Poor/very low caliber set/subset of students
2. The item was so easy that it couldn’t discriminate correctly
3. The item was so difficult that it couldn’t discriminate correctly
4. A negative discrimination index may indicate that the item is measuring something other than what the rest of the test is measuring
5. It may be a sign that the item has been mis-keyed (wrong key) or double-keyed (two correct answers)
6. Too much use of unfair means in the exam, so much so that the item couldn’t discriminate easily
7. Some of the factors mentioned above for the difficulty index (e.g. item too ambiguous, wrongly phrased, beyond scope, etc.), as item discrimination is greatly influenced by item difficulty
It can be seen that all of these except points 2 & 6 are causes of items having a low difficulty index as well as a low/negative discrimination index. It is to be noted that point number 1 is often the major reason for poor item analysis in private medical colleges in Pakistan, especially where admission to medical colleges on merit has not been given much consideration.
Item analysis data are tentative. Such data are influenced by the type and number of students being tested, the instructional procedures employed, and chance errors. Difficulty & discrimination indices should improve/change upon repeated/frequent assessment of subject knowledge.
If repeated use of items is possible, statistics should be recorded for each administration of each item.
(This article was published in Islam Medical & Dental College’s (Sialkot, Pakistan) first college magazine, with some additions brought here)
For many more MCQs (4500 MCQs) divided into units, chapters, and topics read:
‘Learning Pharmacology from Nauman’s MCQS’