Help replying to reviewers: is Student's t-test = ANOVA for two groups?

#1
Hi,

I have received the reviews for a paper I wrote that has a rather complex design, but for this question I will explain only the relevant part. In a between-participants design, I present each participant with one of two scenarios (hedonic vs. utilitarian goal).

As the classification had not been used before, I ran a manipulation check measuring participants' judgments on five semantic differential scales assessing the hedonic/utilitarian dimension.

As Cronbach's alpha was high (.89), I averaged them into a single measure. I then ran a t-test to check whether there was the expected difference between the two groups (i.e. those reading the hedonic scenario and those reading the utilitarian one).
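For concreteness, here is a minimal sketch of the kind of analysis I ran (with made-up data and hypothetical variable names, not my actual numbers):

```python
import numpy as np
from scipy import stats

# Made-up responses: five semantic differential items per participant
rng = np.random.default_rng(0)
n = 40  # participants per scenario (hypothetical)
items_hedonic = rng.normal(5.5, 1.0, size=(n, 5))
items_utilitarian = rng.normal(3.5, 1.0, size=(n, 5))
items = np.vstack([items_hedonic, items_utilitarian])

# Cronbach's alpha across the five items
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

# Average the items into one composite score, then compare the two groups
composite = items.mean(axis=1)
t, p = stats.ttest_ind(composite[:n], composite[n:])
print(f"alpha = {alpha:.2f}, t = {t:.2f}, p = {p:.4f}")
```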

Two reviewers criticized the use of the Student's t-test: one stated "this is not a suitable action, why not F? please correct"; the other stated "a t-test is not suitable for statistical analysis, eg. it does not evaluate the presence of interaction, seek a statistician's advice".

I am 99% positive (but would like feedback) that:
- a t-test and a one-way ANOVA with two groups are exactly the same
- a t-test is appropriate for a manipulation check testing the difference between two groups.

Am I wrong?

Thank you
 
#2
I don't understand the reviewers' comments at all. A t-test is a statistical test commonly used in statistical analysis (including ANOVA and regression). An F test shows the overall strength of the model; if you have only one independent variable, it should yield exactly the same result as a t-test (or at least it does in regression). I don't understand how you could have an interaction in your model: that requires multiple independent variables, which I don't see mentioned.

The ANOVA F statistic will yield the same substantive results as an independent t-test as long as only two levels are being compared. As you increase the number of levels being compared (say you are comparing how three levels of an IV influence a DV), running multiple independent t-tests inflates the family-wise error rate; the ANOVA addresses this and should not.
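To illustrate the two-group equivalence (just a sketch with simulated data, not the OP's actual numbers), the one-way ANOVA and the independent t-test give F = t^2 and the same p-value:

```python
import numpy as np
from scipy import stats

# Simulated scores for two groups (illustration only)
rng = np.random.default_rng(1)
group_a = rng.normal(5.0, 1.0, 40)
group_b = rng.normal(4.3, 1.0, 40)

t, p_t = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
f, p_f = stats.f_oneway(group_a, group_b)   # one-way ANOVA with two groups

print(f"t^2 = {t**2:.4f}  F = {f:.4f}")        # identical
print(f"p(t) = {p_t:.6f}  p(F) = {p_f:.6f}")   # identical
```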

I would think a t-test would be fine if you have enough data (a high enough sample size), the data are roughly normally distributed, and you can calculate a mean.
 

EliteResearch

Guest
#3
Whether or not you are technically correct in your decision to use a t-test instead of an ANOVA, what comes into play in publishing is expectations in your field. Reviewing is (typically) an unpaid, obligatory activity and reviewers tend to use heuristics ("rules of thumb") when processing the information in your paper. Those heuristics include comparing what you've done to what is standard in the field. The best way to get your paper rejected is to argue with the reviewers on a matter like this. If either test will work, then do the one they want you to do, thank them for their insight, and add a line to your CV. Were this a matter of theoretical importance and their suggested revisions would change the nuts and bolts of your paper, by all means (politely) address their concerns and stick to your guns. This is not one of those times.

-Amanda
 

hlsmith

Less is more. Stay pure. Stay poor.
#4
If two reviewers had comparable complaints about your statistics, I would say you need to describe the analyses better in your methods section. Or perhaps you need to provide us with more information, so we can try to work out what the reviewers are seeing.

It seems the second reviewer believes you need to control for the mediating effect of another variable. Is that true, and does it seem reasonable that you are not controlling for another variable that could affect the relationship between your DV and IV? We definitely need more information.
 
#5
EliteResearch was completely right in the comments made. There are biases in every academic field tied to what is most often done methodologically in elite journals. Even if you are right, you won't win by arguing with the reviewers. Unless what they say is fundamentally wrong, you have to go with what the field believes in order to get published (at least with regard to methods, if not theory).

One of my professors, very sharp in methods and substance, told me that a journal demanded she dichotomize her interval data so it could be analyzed with logistic regression. This makes zero sense; you should never turn interval data into a binary variable simply to match common usage in the field. But it was do that or not get the article published, and she made the pragmatic decision to get published.
 

hlsmith

Less is more. Stay pure. Stay poor.
#6
Reviewers typically do not see each other's comments during the first round of review until the comments are sent to the authors, unless one of them is the editor. I would go back to my earlier point: if they both questioned your approach, there could be a problem somewhere, and if they are reflective of the journal's readership, then readers may also question the approach and, by extension, the results.
 
#7
Thank you all for your replies.

Thank you for reminding me to be pragmatic.
I was probably not going to argue with the reviewers (I know it is a lot of work, as I do it myself), but as two reviewers commented on a similar point, I started to doubt.

I'm reassured by your comments (though I am still confused by the comment about the t-test not testing for interaction, as there is only one independent variable at this stage). I am going to make it an F, hoping for an extra line in my CV!
 
#8
I am completely baffled how you can have an interaction with one independent variable. I would be tempted to ask the reviewer this (except for the fear that either I missed something obvious or they did). :p BTW, from previous comments from professors, you don't have to accept every change demanded. If you can make a good case for not doing so, the editor may agree with you. I had a major professor who told a journal editor that he would not make any changes and that he looked forward to seeing him at the convention in Las Vegas (apparently they were old friends). The article, which I was actually first author on, got printed, and in a pretty good journal.

Of course, he had 200-plus peer-reviewed articles at that time... The rules that govern elites in a field and relatively new authors vary a lot. But if you really think the reviewers are wrong, you have to balance getting published against getting published saying something that is methodologically flawed. What you publish is your reputation in academics.
 

CB

Super Moderator
#9
Two reviewers criticized the use of the Student's t-test: one stated "this is not a suitable action, why not F? please correct"; the other stated "a t-test is not suitable for statistical analysis, eg. it does not evaluate the presence of interaction, seek a statistician's advice".

I am 99% positive (but would like feedback) that:
- a t-test and a one-way ANOVA with two groups are exactly the same
You are correct. The t and F distributions are very closely related: if a variable X follows a t distribution with m degrees of freedom, then X^2 follows an F distribution with df1 = 1 and df2 = m. The p-values from the two tests will be identical in the case of one dichotomous IV (you can easily check this).
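If you want to verify the distributional relationship directly (a small sketch, not tied to your data), the two-sided t p-value and the F(1, m) p-value evaluated at t^2 coincide:

```python
from scipy import stats

m = 38   # degrees of freedom, e.g. n1 + n2 - 2 (arbitrary here)
x = 2.3  # an arbitrary t statistic

p_t = 2 * stats.t.sf(abs(x), df=m)    # two-sided p from t(m)
p_f = stats.f.sf(x**2, dfn=1, dfd=m)  # upper-tail p from F(1, m) at x^2

print(p_t, p_f)  # the two p-values agree
```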

It is weird that both reviewers commented on this though. Is there no chance that they might be implying something more substantive, like they actually believe you should be testing for the effects of more than one IV?

On the other hand, if the comments really are as senseless as they seem at first glance, then I see nothing wrong with a response that politely points out that the F and t tests are equivalent in this situation and produce the same p-value. I understand the pragmatic wish not to rock the boat, but with a bit of diplomacy and careful explanation this shouldn't be the kind of thing that results in a nasty argument.
 
#10
This nice artwork can give you some interesting hints on how to respond.
Oh I am using that next time I have to reply to nonsense reviews!

I have also reviewed several papers, and at times worried that I had misunderstood the authors completely. Sometimes I highlight things just to say that a passage isn't clear enough for me to understand; perhaps it's worth re-reading the section they are referring to and just checking that it is clear what you have done.
 