Item Statistics provide information on an item's use and on past student performance. This information helps teachers evaluate the quality of an item based on how students performed. Item Statistics can be accessed in the Item Bank module as well as during test creation in the Assessment module. Item Statistics are available only if the item has been used in an assessment for which statistics have been calculated.
- Data View: View Summary Level Data, View Item Details
- System Admin: View Statistics Data
Accessing Item Statistics in Assessment Creation
When selecting items for an assessment, the Item Statistics icon appears at the far right of each item area (if statistics have been calculated for the item).
Accessing Item Statistics from an Item Bank
When browsing items in an Item Bank, the Item Statistics icon appears at the far right of each item area (if statistics have been calculated for the item).
Viewing Item Statistics
- Use the dropdown to select an Assessment Year to view statistics data for, then select Refresh.
- Item Statistics display for each assessment the item was used on, starting with the most recent assessment at the top. Different statistics are displayed depending on the item type.
- Use the + button to open the Item Statistics for that Assessment.
- Item Properties display on the right.
Defining Item Statistics
- Used in Assessment: The name of the test the item was used in.
- Year: Year the assessment was administered.
- Statistic Descriptors:
- X (Mean): The average score.
- SD (Standard Deviation): The amount of variation in scores relative to the mean.
- MD (Median): The middle value of the student scores on the assessment.
- Cronbach Alpha: A measure of internal consistency; how closely related a set of items are as a group.
- SEM (Standard Error of Measurement): A measure of the precision of the assessment.
- Updated: The date the item was last updated.
- Average Time Spent: The average time it took a student to complete the item.
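The platform does not expose how it computes these descriptors, but they can be sketched with standard formulas. The snippet below uses a made-up item-score matrix (every score and student is hypothetical) to illustrate the mean, standard deviation, median, Cronbach's alpha, and SEM as commonly defined:

```python
import statistics as st

# Hypothetical item-score matrix: rows = students, columns = items (1 = correct, 0 = incorrect)
scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

totals = [sum(row) for row in scores]  # each student's total score
mean = st.mean(totals)                 # X (Mean)
sd = st.pstdev(totals)                 # SD (Standard Deviation)
md = st.median(totals)                 # MD (Median)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)
k = len(scores[0])
item_vars = [st.pvariance([row[i] for row in scores]) for i in range(k)]
alpha = k / (k - 1) * (1 - sum(item_vars) / st.pvariance(totals))

# SEM: SD * sqrt(1 - reliability), using alpha as the reliability estimate
sem = sd * (1 - alpha) ** 0.5

print(f"Mean={mean:.2f}  SD={sd:.2f}  Median={md}  Alpha={alpha:.2f}  SEM={sem:.2f}")
```

A low alpha here simply reflects the tiny fabricated sample; the platform's reported values come from real assessment results.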
Classical Test Theory
- Sample Size: The number of student scores that were used to calculate the statistics (students who took the assessment with the item).
- P-Value: A measure of item difficulty; ranges between 0.0 and 1.0; the higher the value, the greater the proportion of students who answered correctly, indicating an easier item.
- Pt. Biserial: A measure of item reliability; ranges between -1.0 and +1.0; correlates student scores on one item with scores on the test as a whole; the closer to 1.0, the more reliable the item, because it discriminates well between students who mastered the test material and those who did not.
- Discrimination Index: The difference between the proportion of correct responses in the Upper 27% group and in the Lower 27% group; higher values indicate the item better separates high and low performers.
- Upper 27%: The group of students in the top 27% of the score distribution.
- Lower 27%: The group of students in the bottom 27% of the score distribution.
- Frequency Distribution: The percent of students who selected each answer choice.
- Choice Statistics: The percent of students who answered correctly, answered incorrectly, or did not select an answer.
Item Response Theory
- Sample Size: The number of student scores that were used to calculate the statistics (students who took the assessment with the item).
- 2PLM (a): Parameter a of the IRT two-parameter logistic model; describes how well the item differentiates student ability.
- 2PLM (b): Parameter b of the IRT two-parameter logistic model; describes the difficulty of the item.
- 3PLM (a): Parameter a of the IRT three-parameter logistic model; describes how well the item differentiates student ability.
- 3PLM (b): Parameter b of the IRT three-parameter logistic model; describes the difficulty of the item.
- 3PLM (c): Parameter c of the IRT three-parameter logistic model; describes the probability of a student guessing the correct answer for the item.
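To make the parameters concrete, the standard 2PL and 3PL item response functions can be sketched as below. The parameter values are illustrative only, not from any real calibration: a is discrimination, b is difficulty (on the same scale as ability theta), and c is the lower asymptote for guessing.

```python
import math

def p_2pl(theta, a, b):
    """2PLM: probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """3PLM: adds a lower asymptote c, the guessing parameter."""
    return c + (1 - c) * p_2pl(theta, a, b)

# Illustrative parameter values (not from any real item calibration)
a, b, c = 1.2, 0.5, 0.2
for theta in (-2, 0, 0.5, 2):
    print(f"theta={theta:+}: 2PL={p_2pl(theta, a, b):.2f}  3PL={p_3pl(theta, a, b, c):.2f}")
```

When a student's ability equals the item difficulty (theta = b), the 2PL probability is exactly 0.5; under the 3PL model it is higher, because even low-ability students can guess correctly.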