Confusion concerning repeated measures and t-test. Would love input.

#1
Hi guys

So I'm doing my dissertation.

I did a study with two groups, a control group and an intervention group. Both were tested at baseline and at follow-up.

I ran a repeated measures ANOVA in SPSS, which revealed no significant differences between groups. I then did paired t-tests post hoc. The paired t-tests showed that the control group got significantly better from baseline to follow-up. The intervention group did not get significantly better.

My questions are two-fold:

1. Why did the control group show significant improvement on the t-tests when that was not detected by the repeated measures ANOVA comparing groups?
2. My supervisor mentioned that t-tests were not appropriate for post hoc testing. I simply do not understand this and would like to know which tests are appropriate post hoc following repeated measures.

My apologies if any of this is unclear, I am still new to statistics and am trying to gain a grasp of it. If necessary ask away and I will answer to the best of my capability.

Input will be greatly appreciated.

Have a nice evening.

Tobias
 

katxt

Active Member
#2
I guess that you did a two-way repeated measures ANOVA, group × time. Did you include the interaction?
If you found no significant difference then that is the end of it. You shouldn't really do anything else. Post hoc tests are there to test between different levels of a factor after some significant difference is found, but because you have only two levels for each factor, post hoc tests are not needed. If you had several levels, then paired t-tests would be appropriate, with some adjustment of the significance level. It is quite possible to get post hoc p-values less than 0.05 which don't prove significance.
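To make the multi-level case concrete, here is a minimal sketch (simulated data, not from this study) of paired t-tests between three time points with a Bonferroni-adjusted significance level, using scipy:

```python
# Hypothetical sketch: post hoc paired t-tests with a Bonferroni adjustment,
# for a within-subject factor with three levels. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 11
t1 = rng.normal(50, 5, n)        # scores at time 1
t2 = t1 + rng.normal(2, 3, n)    # scores at time 2
t3 = t1 + rng.normal(4, 3, n)    # scores at time 3

pairs = [("t1 vs t2", t1, t2), ("t1 vs t3", t1, t3), ("t2 vs t3", t2, t3)]
alpha = 0.05 / len(pairs)        # Bonferroni-adjusted significance level
for label, a, b in pairs:
    t, p = stats.ttest_rel(a, b)  # dependent-samples (paired) t-test
    print(f"{label}: t = {t:.2f}, p = {p:.4f}, sig at adjusted alpha: {p < alpha}")
```

With two levels per factor, as in the thread, no such adjustment or post hoc step is needed; the omnibus test already answers the question.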

For a simple method try a paired t test between the groups using the after-before differences.
 
#3
Cheers


Yes I included the interaction.

But is there any way I can statistically underpin that the control group got better?

What do you mean by using paired t-tests? That is what I did.

Thanks again...

Best wishes
Tobias
 

Karabiner

TS Contributor
#4
I am not sure exactly which problem you want to solve.

You have 2 groups with pre-post measures. Was allocation to groups randomized?
Were the baseline means of the dependent variable roughly equal in both groups?
And how large were your sample sizes?

You performed a repeated-measures ANOVA and it showed no significant main
effect of "group" across both time points and, more importantly, no
group*time interaction (correct?), which means that it could not be demonstrated
that the pre-post difference was different between groups.

As @katxt already mentioned, it is doubtful that post-hoc tests make sense here.
You had no statistically significant effects, so what are cell comparisons needed for?

"But is there any way I can statistically underpin that the control group got better?"

This objective is not clear to me. Would you like to demonstrate that one reason for the
non-significant interaction was that the subjects in the control condition improved? Then a
dependent-samples t-test could perhaps be nice to have, but the descriptive statistics for
the control group would be sufficient, in my opinion.

With kind regards

Karabiner
 
#5
Hi karabiner

Thank you for the reply.

Yes, the groups were roughly equal at baseline. Subjects were randomized into two equal groups of 11 each.

And yes, the group*time interaction showed no effect.

What I'm understanding is that, with no significant effect detected by the repeated measures ANOVA, there is no need for post hoc tests?

I just also would like to know what you mean by the descriptive statistics for the control group?

I'm sorry if I ask stupid questions but am somewhat confused.

Thanks again.
Kind regards
Tobias
 

Karabiner

TS Contributor
#6
I imagined that you perhaps performed the within-group tests in order to demonstrate that
the non-significant ANOVA results were based on an unexpectedly large improvement of the
control group. If one wanted to demonstrate that, one could think of a dependent-samples
t-test, but in my opinion, one could just describe the means (and standard deviations) of
each group at t1 and t2.

With kind regards

Karabiner
 

Karabiner

TS Contributor
#8
There is something magical about tests of significance. You've been told three or four times that it does not make much sense here, but you take this as an argument in its favour... (no harm intended)

With kind regards

Karabiner
 

katxt

Active Member
#11
There are several t-tests you could do: between groups before and after, and between before and after within each group. None of these are really relevant to your project as I read it. The groups may differ, and there may be an overall before/after difference not related to the intervention (practice, or familiarity with the experimenter, for instance).

The question you really want answered is "Has the intervention made a difference?" We want to find out whether or not the change in the experimental group is noticeably different from the change in the control group. This is called a difference-in-differences (DID or diff-in-diff) test, or a BACI test in other disciplines. As Karabiner has pointed out, it is equivalent to a significant interaction in the two-way ANOVA. This test takes into account differences between the groups, and non-intervention differences between the times.

If you want a simpler method which is mathematically identical to the interaction method and gives the same results, then find the after - before difference for each subject. A t-test on the two sets of differences will then give you the p-value for the DID test. (I misspoke before, this test is clearly not paired.) A significant result here shows the intervention made a difference.
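That simpler method can be sketched in a few lines of Python (simulated data, not the thread's actual measurements; the unpaired test on the change scores is scipy's `ttest_ind`):

```python
# Sketch of the difference-in-differences idea: an independent-samples t-test
# on the after-minus-before change scores of each group. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 11                                        # per-group sample size from the thread
ctrl_pre  = rng.normal(50, 5, n)
ctrl_post = ctrl_pre + rng.normal(3, 3, n)    # control improves (e.g. practice effect)
intv_pre  = rng.normal(50, 5, n)
intv_post = intv_pre + rng.normal(3, 3, n)    # intervention adds nothing extra here

ctrl_diff = ctrl_post - ctrl_pre              # change score per control subject
intv_diff = intv_post - intv_pre              # change score per intervention subject

# Unpaired t-test comparing the two sets of change scores = the DID test.
# Its p-value matches the group*time interaction of the 2x2 mixed ANOVA.
t, p = stats.ttest_ind(intv_diff, ctrl_diff)
print(f"DID test: t = {t:.2f}, p = {p:.3f}")
```

Note the test is unpaired (the two groups contain different subjects), even though each change score comes from a paired pre-post measurement.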