Progress-Over-Time Reports with Chi Square

Progress-Over-Time reports are used to answer the question, “Are our students learning?” To investigate progress over time, we can look at an Outcome Set with two different date ranges to determine if the scoring distribution has changed over time. To do this, we want to add a second time interval.

In a Progress-Over-Time report, two Point-in-Time reports are compared to examine progress in learning over time. We expect the mean score for the first dataset (pre-scores) to be less than the mean score for the second dataset (post-scores).

If you have sufficient granularity in your performance levels, you can use the Chi Square test to determine whether the difference between the two scoring distributions is significant or could simply have occurred by chance. Scoring Criteria with fewer than six performance levels are not granular enough to yield reliable results. If you have fewer than six performance levels, consider revamping your default levels to increase the number to at least six. Use the Chi Square test to make comparisons between two or more scoring distributions.
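
If you want to verify this kind of comparison outside the software, here is a minimal sketch of the statistic the Chi Square test computes, using Python's scipy library. The performance-level counts are invented for illustration and are not taken from any report; this shows the general technique, not the report's own implementation.

```python
# A hedged sketch: comparing two scoring distributions with a chi-square
# test of independence. All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: the two time intervals; columns: counts per performance level (1-6).
observed = [
    [12, 18, 25, 20, 15, 10],  # pre-scores (first time interval)
    [5, 10, 18, 24, 22, 21],   # post-scores (second time interval)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"Chi Square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A p-value below .05 suggests the two distributions differ beyond chance.
```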

Step 1: Select Advanced Filters Option

1. Click the Advanced Filters option.

Step 2: Select Report Scope Options

2. Use the List By drop-down menu to select any option.

3. Use the Statistics drop-down menu to select Performance Level (Counts). If you select any other option, you will not be able to run the Chi Square test comparing time intervals.

Step 3: Select Time Intervals

4. Click into the date fields to select the first time interval for your report.

5. Click Apply.

Step 4: Add Comparison Intervals

6. Click on the + icon to reveal a second set of date range fields. Click into the date fields to select the subsequent time interval for your report. Repeat until you have all desired time intervals selected.

Step 5: Select Filter Options

All Filter Options are optional; select only those needed for your specific report.

7. Click the Choose Instrument button to select the Assessment Instrument. This button may also be labelled Choose Outcome if you are running the report by Outcome Set.

8. Use the Table of Contents menu to select a specific Table of Contents and section(s) to filter the assessments in your report.

9. If you wish to filter the assessments by responses that you have collected on one of your Forms, you can choose to either "Filter by select individual responses" or "Compare responses for a single question". You will then be prompted to select the Form and the question(s) you wish to use for filtering or comparing.

10. If you wish to filter the assessments by the department or group to which the students belong, use the Choose button to make this selection.

11. If you wish to filter the assessments by Assessor, you can choose to "Filter by Assessor Department/Group", "Filter by Individual Assessor", or "Filter by Department Contact List".

12. The Assessment Volume option allows you to set a minimum number of assessments that must exist for the selected Assessment Instrument or Outcome Set in order for it to be included in your report.

Step 6: Select Calculation & Output Options

13. Calculation Options

  • In Case of Multiple Submissions: If more than one submission was made and scored, this option allows you to include all submissions, average student submission scores, use the latest student submission score, or use the earliest student submission score.
  • Filter by Date: This option allows you to filter by the date the submissions were made or the date the submissions were assessed.
  • Statistics Mode: This option allows you to choose whether to display population or sample statistics.
  • Calculate Rubric Means Using: You have the option to calculate instrument means using Criterion scores or Overall scores.
  • Reliability Assessment Scores: If you have performed reliability tests using this assessment instrument, you can choose to include or exclude the reliability assessment scores.
  • Held Scores: This option allows you to choose whether to include held scores in your report.
  • Resubmission Scores: This option allows you to choose whether to include or exclude resubmission scores in your report.

14. Output Options

Select the output options you wish to include in your report by clicking on the associated checkboxes.

15. Pegging Scheme

If you would like to apply a pegging scheme to the assessment instrument used for this report, use the drop-down menu to select the desired pegging scheme.

Step 7: Generate Report

16. Click on the Generate Report button.

Step 8: Run the Chi Square Simulator

Once you have generated the table report:

17. Click on the Chi Square Simulator button.

The simulator allows you to select the distribution score columns that you want to compare. This is important because the Chi Square test becomes unreliable when any cell count is less than 5, so such cells cannot be included in the comparison.

If all cell counts are 5 or greater and you wish to compare all columns, then you can skip this step and click the Chi-Square Test button right away.
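
To illustrate the rule, here is a short check in the same sketch style as above; the counts are again hypothetical and stand in for the performance-level counts in your generated table.

```python
# Check the minimum-count rule before running the test.
# These counts are hypothetical, matching the structure of the earlier sketch.
observed = [
    [12, 18, 25, 20, 15, 10],  # first time interval
    [5, 10, 18, 24, 22, 21],   # second time interval
]

small = [c for row in observed for c in row if c < 5]
if small:
    print(f"{len(small)} cell(s) below 5; exclude or merge those columns")
else:
    print("All cell counts are 5 or greater; safe to compare all columns")
```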

18. Click and drag your cursor over the cells in the row that you wish to compare.

Select only the performance level columns, one time interval at a time. Repeat for each additional time interval.

A table will appear with the selections that you have made.

19. Click the Calculate Chi-Square button.

Step 9: Review the Chi-Square Probability Chart

In the Chi Square Probability Chart, look at the appropriate Degrees of Freedom (DF) row and the .05 alpha level column. This alpha level represents an acceptable probability threshold for the type of data we are using and a small to medium sample size.

Compare the critical value in the chart with the calculated Chi Square value. The degrees of freedom equal (time intervals - 1) × (performance levels - 1); for example, two time intervals across seven performance levels give 6 DF, for which the critical value at the .05 alpha level is 12.59. If the calculated value is greater than the value in the chart, then the difference between the two scoring distributions is significant. This can be interpreted as meaning that the change in the students' scores is very unlikely to have occurred simply by chance. It does NOT necessarily mean that they learned anything because of your instructional program. But, if your student sample is broad and heterogeneous enough, significant learning progress can indicate that you are reaching learning goals. This information supports system validity, and you have come a long way from believing that ANY number generated by digital assessment software is necessarily true.
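
The chart lookup itself can also be reproduced in the same sketch style. The degrees of freedom follow the standard contingency-table formula, and the calculated Chi Square value below is hypothetical, standing in for the result from your report.

```python
# Look up the critical chi-square value instead of reading it from a chart.
from scipy.stats import chi2

# Degrees of freedom: (time intervals - 1) * (performance levels - 1).
df = (2 - 1) * (7 - 1)           # two intervals, seven levels -> 6 DF
critical = chi2.ppf(1 - 0.05, df)
print(f"Critical value at alpha = .05, df = {df}: {critical:.2f}")  # ~12.59

calculated = 14.30               # hypothetical Chi Square value from a report
print("Significant" if calculated > critical else "Not significant at .05")
```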