Predictive Model
The Predictive model can be used with any assessment that has sufficient prior testing. These prior tests, or predictors, can include assessments in the same subject or in different subjects.
This model generates an expected performance for each student. Expected performance reflects the student's achievement before the current school year or before they entered the grade and subject or course.
A student's expected performance is the score the student would make on the selected assessment if the student made average, or typical, growth. To generate each student's expected performance, we build a robust statistical model of all students who took the selected assessment in the most recent year. The model includes the scores of all students in the state, along with their testing histories across years, grades, and subjects.
By considering how all other students performed on the assessment in relation to their testing histories, the model calculates an expected performance for each student based on their individual testing history.
To ensure precision in the expected performances, a student must have at least three prior assessment scores. This does not mean three years of scores or three scores in the same subject, but simply three prior scores across grades and subjects.
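To make these mechanics concrete, here is a minimal Python sketch of fitting such a model. The actual analysis uses a robust statistical model whose exact form is not described here; ordinary linear regression, the DataFrame layout, the hypothetical column names (current_score, prior_*), and the mean imputation of missing priors are all illustrative assumptions.

```python
# Illustrative sketch only. The production analysis uses a robust
# statistical model; ordinary linear regression stands in for it here,
# and the column names (current_score, prior_*) are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_expected_score_model(histories: pd.DataFrame) -> tuple[LinearRegression, list[str]]:
    """Fit current-year scores against each student's prior scores."""
    prior_cols = [c for c in histories.columns if c.startswith("prior_")]

    # Eligibility rule from the text: at least three prior scores,
    # counted across grades and subjects, not per subject or per year.
    eligible = histories[histories[prior_cols].notna().sum(axis=1) >= 3]

    # Simplification: fill remaining gaps with column means. The real
    # model's handling of incomplete testing histories will differ.
    X = eligible[prior_cols].fillna(eligible[prior_cols].mean())
    y = eligible["current_score"]

    model = LinearRegression().fit(X, y)
    return model, prior_cols
```

A student's expected performance would then be the model's prediction for that student's own prior scores, which is what the Zachary and Adam examples below walk through.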
Let's consider an example. Zachary is a high-achieving student who has scored well on state assessments for the past few years, especially in math. To predict Zachary's score on the assessment, we:
- Determine the relationships between the testing histories of all students and their actual performance on this assessment in the same year.
- Use these relationships to determine what the expected score would be for Zachary, given his own personal testing history.
Based on Zachary's testing history, a score at the 83rd percentile would be a reasonable expectation for him.
In contrast, Adam is a low-achieving student who has struggled in math. His prior scores on state assessments are low. Just as with Zachary, we use the relationships between the testing histories of all students and their actual performance on the assessment statewide to determine an expected performance for Adam. Based on Adam's own personal testing history, a score at the 26th percentile would be a reasonable expectation for him.
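The percentile expectations in these examples come from placing each expected score within the statewide distribution of scores on the assessment. A minimal sketch, assuming the simple percentile-of-score definition (the function name and arrays are illustrative):

```python
import numpy as np
from scipy import stats

def expected_percentile(expected_score: float, statewide_scores: np.ndarray) -> float:
    """Place an expected scale score within the statewide distribution
    of actual scores on the selected assessment."""
    return stats.percentileofscore(statewide_scores, expected_score)

# For a high achiever like Zachary this lands high in the distribution
# (around the 83rd percentile); for Adam, much lower (around the 26th).
```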
Once an expected performance has been generated for each student in the group, the expected performances are averaged. Because this average expected performance is based on the students' prior test scores, it represents the expected achievement in this subject for the group of students.
Next, we compare the students' actual performance on the assessment to their expected performance. If a group of students scores what they were expected to score, on average, we can say that the group made average, or typical, growth. In other words, their growth was similar to the growth of students at the same achievement level across the state. This is the definition of meeting expected growth in the predictive model.
If a group of students scores significantly higher than expected, we can conclude that the group made more growth than their peers across the state. If a group scores significantly lower than expected, the group did not grow as much as their peers.
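The documentation does not specify how statistical significance is determined. The sketch below uses a one-sample t-test on the per-student differences purely to make the comparison logic concrete; the report's actual standard-error calculation may differ.

```python
import numpy as np
from scipy import stats

def group_growth_status(actual: np.ndarray, expected: np.ndarray,
                        alpha: float = 0.05) -> str:
    """Classify a group's growth by comparing actual to expected scores.

    A one-sample t-test on the differences stands in for the report's
    actual significance calculation, which is not described here.
    """
    diffs = actual - expected
    _, p_value = stats.ttest_1samp(diffs, popmean=0.0)
    if p_value >= alpha:
        # Average actual is statistically indistinguishable from average
        # expected: the group made typical growth.
        return "met expected growth"
    return ("exceeded expected growth" if diffs.mean() > 0
            else "did not meet expected growth")
```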
The growth measure is a function of the difference between the students' expected performance and their actual performance. This value is expressed in scale score points and indicates how much higher or lower the group scored, on average, compared to what they were expected to score given their individual testing histories. For example, a growth measure of 9.3 indicates that, on average, this group of students scored 9.3 scale score points higher than expected. When generating growth measures for Teacher Value-Added reports, students are weighted for each teacher based on the proportion of instructional responsibility claimed in the data submitted for analysis.
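Putting the pieces together, the growth measure is the weighted average difference between actual and expected scores. A minimal sketch, assuming responsibility weights are expressed as proportions between 0 and 1 (the function and the sample numbers are hypothetical):

```python
import numpy as np

def teacher_growth_measure(actual: np.ndarray, expected: np.ndarray,
                           responsibility: np.ndarray) -> float:
    """Weighted average of (actual - expected) in scale score points,
    weighting each student by the teacher's claimed proportion of
    instructional responsibility."""
    return float(np.average(actual - expected, weights=responsibility))

# Hypothetical numbers: three students, one claimed at 50% responsibility.
actual = np.array([512.0, 498.0, 530.0])
expected = np.array([505.0, 495.0, 518.0])
weights = np.array([1.0, 0.5, 1.0])
print(teacher_growth_measure(actual, expected, weights))  # 8.2
```

A positive result, like the 9.3 in the example above, means the group scored higher than expected on average; a negative result means it scored lower.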