Figuring student improvement fairly

dwhite00

New member
Joined
Feb 26, 2012
Messages
3
I am a teacher and I am trying to compute student improvement on a standardized test. The problem is best illustrated through an example. I have a student who had a 90% on a test last year, and now has a 99% - a 10% improvement. Another student scored 20 on the same test last year, and this year scores 24 - a 20% improvement. I believe intuitively that the student who had a 90 now achieving a 99 is more improved than the student who had a 20 and moved up to a 24. Yet the 24 student is showing twice as much improvement, based on percentage. Is there a more fair way to calculate improvement, or is my intuition simply wrong?
 

This is a case where, I think, you need to consider "room to improve".

In the first case (90 → 99, \(\displaystyle \frac{9}{100-90}\)), the student used 90% of the room available.

In the second case (20 → 24, \(\displaystyle \frac{4}{100-20}\)), the student used 5% of the room available.
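The "room to improve" calculation above can be sketched in a few lines of Python; the function name here is just illustrative:

```python
def room_used(before, after, max_score=100):
    """Fraction of the available room (max_score - before) that was gained."""
    if before >= max_score:
        return 0.0  # already at the ceiling, no room left
    return (after - before) / (max_score - before)

# The two students from the example:
print(room_used(90, 99))  # 0.9  -> used 90% of the available room
print(room_used(20, 24))  # 0.05 -> used 5% of the available room
```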
 
Thanks for the answers. The percentage-point method seems more reasonable - though less elegant - to me. While it seems that the person who went from 90 to 99 improved more than the person who went from 20 to 24, if I were grading based on improvement, could I really say that the 99 student improved 18 times more than the 24 student? What I'm really after here is a way of figuring a fair grade based on improvement, rather than on achievement, and the simple percentage-point method seems to make more sense. However, it still has the problem of only allowing small possible gains to anyone who did exceptionally well the first time through. For example, consider a student going from 95 to 100, as opposed to a student going from 20 to 30: by percentage points, the second student improved twice as much as the first. Yet it seems to me that those final 5 points are much more difficult to achieve than the 10 points needed to get from 20 to 30.
This problem also applies to current teacher evaluation systems that are being proposed based on how much students improve under a specific teacher's instruction. A teacher who has high performing students who are still at the same level of performance in the next year will be rated lower than a teacher who has low performing students who achieve slightly higher in a subsequent year. Clearly it is unfair to make this determination based on percentage points, but what's the alternative?
Thanks again to both of you for taking the time to work with me on this.
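One way to see the trade-offs being discussed is to compute all three measures side by side for the students mentioned in this thread; the function name and layout are just illustrative:

```python
def improvement_metrics(before, after, max_score=100):
    """Return (relative improvement, point gain, fraction of room used)."""
    relative = (after - before) / before  # improvement relative to old score
    points = after - before               # raw percentage-point gain
    if before >= max_score:
        gain = 0.0                        # no room left to improve
    else:
        gain = (after - before) / (max_score - before)
    return relative, points, gain

for before, after in [(90, 99), (20, 24), (95, 100), (20, 30)]:
    rel, pts, gain = improvement_metrics(before, after)
    print(f"{before} -> {after}: relative {rel:.1%}, "
          f"points {pts}, room used {gain:.1%}")
```

Each metric ranks the same pairs of students differently, which is exactly the dilemma raised above: relative improvement favors low starters, point gains favor whoever has room to add points, and "room used" favors high starters.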
 
I think it will be most beneficial if you track a consistent metric over time. It would serve as a basis to judge whether the students are improving or not. Getting a high grade on the first test isn't always good, especially when you see a decline in the succeeding results.

From a statistical standpoint, it is a lot better to see a lower score to begin with but a trend of improvement afterward.
 