I have 10 groups that took the same test at two different points in time, and each group had a different starting mean. I believe my null hypothesis is that all groups improved by the same amount relative to their first-test mean, but I am confused about how to actually compare the differences in means across the groups.
Ex:
Test 1 means:
Group A: 85
Group B: 78
Group C: 79
Test 2 means:
Group A: 87
Group B: 81
Group C: 82
Because the full set of test scores is normally distributed, my intuition is that the 2-point gain from 85 to 87 represents a bigger improvement than the 3-point gain from 79 to 82 (i.e., each additional point above the mean is "harder" to earn than the last). How do I quantify this idea so that I can actually compare the improvements across groups?
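For concreteness, here is a rough sketch in Python of one way I imagined making the comparison: convert each group's raw mean into a percentile under an assumed normal distribution of scores. The overall mean (75) and standard deviation (8) below are made-up placeholders, not my real data, and I am not sure whether comparing percentile changes like this is statistically sound; that is really the heart of my question.

    # Sketch: compare improvements as percentile changes rather than raw points,
    # under an assumed normal score distribution (placeholder parameters).
    from scipy.stats import norm

    POP_MEAN = 75  # hypothetical overall Test 1 mean
    POP_SD = 8     # hypothetical overall Test 1 standard deviation

    def percentile(score):
        """Percentile of a raw score under the assumed normal distribution."""
        return norm.cdf(score, loc=POP_MEAN, scale=POP_SD)

    for group, before, after in [("A", 85, 87), ("B", 78, 81), ("C", 79, 82)]:
        gain = percentile(after) - percentile(before)
        print(f"Group {group}: {before} -> {after}, percentile gain = {gain:.3f}")

Running this, Group A's 2-point gain corresponds to a smaller percentile change than Group C's 3-point gain, precisely because scores near the tail are rarer. Is this the right way to express "a point near the tail is harder", or should I be comparing the groups some other way?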