multiple comparisons

#1
Hi,
I'm a vet doing a research project, but am not really strong in stats. I have a question about whether correction for multiple comparisons is needed in my study.
I am looking at 100 dogs (half with and half without disease). The disease is caused by a skull malformation. I have done CT exams (CAT scans) on all of them. I am measuring several areas of the skull to find any measurements that correlate with disease (I have done Student's t-tests). Preliminary numbers show that two (of about ten) bone measurements may be significant (P << 0.05).
Others in my dept think I need to correct for multiple comparisons since I have measured several structures. I cannot get my head around why I would need to in this particular case.
Each dog is either sick or not -- that makes two groups. Each subject already has each bone I measure (and a whole lot more I don't measure). At no point are these bone measurements compared to one another. Each is a fixed value independent of the others, and cannot change based on any other measurement. If the height of a bone correlates with disease status, but the width and length do not, how could my simply knowing those measurements weaken the significance of the height?
It seems that I should just ignore the eight non-significant measurements on my next 50 dogs and guarantee the significance of the other two. But that seems crazy.
I do understand why, for example, a study with a control group and three other groups given drugs A, B, and C would need correction, but those are different subjects with different treatments. That is not my case.
I eventually have to defend these numbers so I need to understand them. Anyone have an opinion on whether or not I need to correct these values for a million comparisons? I'd love to hear it.
Thanks for your time and expertise.
JK
 
#2
When I set alpha = 0.05, I am saying that I only want my test to reject a true null hypothesis 5% of the time. If I assume that my 10 tests are independent and I run each one at the alpha = 0.05 level, the probability that I reject at least one true null hypothesis somewhere in the experiment is NOT 5%. For independent tests it is 1 - (1 - 0.05)^10 ≈ 0.40, i.e. about 40% (the simple bound of (# of tests) × alpha = 0.50 is an upper limit on it)!!!

So now, rather than a relatively low false positive rate, you might as well flip a coin to decide whether you have rejected a true null.
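To make that concrete, here is a minimal Python sketch (the group sizes, seed, and made-up "no real difference" data are purely illustrative) that computes the chance of at least one false positive across ten independent t-tests and checks it by simulation:

```python
# Family-wise error rate for k independent tests at alpha = 0.05.
# Illustrative only: both "groups" come from the same distribution,
# so any significant result is a false positive by construction.
import numpy as np
from scipy import stats

alpha, k = 0.05, 10

# Analytic probability of at least one false rejection among k independent tests
fwer = 1 - (1 - alpha) ** k
print(f"Analytic FWER: {fwer:.3f}")              # about 0.40

# Quick simulation: 10 measurements, no real group difference
rng = np.random.default_rng(0)
n_sims, n_per_group = 5_000, 50
false_hits = 0
for _ in range(n_sims):
    sick = rng.normal(size=(n_per_group, k))     # "sick" dogs
    healthy = rng.normal(size=(n_per_group, k))  # "healthy" dogs
    p = stats.ttest_ind(sick, healthy, axis=0).pvalue
    false_hits += (p < alpha).any()              # any measurement "significant"?
print(f"Simulated FWER: {false_hits / n_sims:.3f}")
```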

That is the intuition behind multiple testing procedures. I will let others on this forum make recommendations on how to address this, but as a statistician I am more interested in whether you really do have 10 independent tests, i.e., are all of your measurements independent of each other, or is there a correlation structure in your data that would let you block measurements?

Best
 
#3
hockeyfan,
I understand the rationale for the correction in many study designs, but I am not comparing Control to A, Control to B, A to B, etc. Whether I measure the one or two values I care about or 100 different skull structures, each stands alone as a predictor of clinical disease. I could ignore all of the other measurements in the next 50 dogs and still have a very significant correlation in one of my measurements. Can you see the source of my confusion? It's not that I don't understand what multiple groups do to your alpha; it's whether or not I have multiple groups.
Thanks,
JK
 

bugman

Super Moderator
#4
I'm not entirely clear what your independent variable is. Is it presence/absence of disease (binary)?

It sounds like you have two groups (dogs with and dogs without the disease). Are multiple measurements taken from each dog, or is dog 1 measured at region x, dog 2 measured at region y, etc.?

Phil
 
#6
bugman,

Each dog is assigned to one of two groups -- positive for disease or negative for disease -- based on exam, history, etc.
Each dog has the same set of 10 skull measurements made.
All of my results for each single measurement (across all dogs) are compared to disease status to see whether the two groups (sick or not) are the same or different with regard to that measurement, using a t-test.

So I am not manipulating a variable, but I am measuring several -- does that make the measurements the dependent variables?
Anyway, I hope that makes the design clearer; a rough sketch of what I'm running is below.
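(For concreteness, a minimal Python sketch of the per-measurement comparison -- the file name and the column names m1 ... m10 are placeholders, not my real variable names.)

```python
# One t-test per skull measurement, comparing sick vs. healthy dogs.
# Assumes a table with one row per dog, a 0/1 "sick" column, and
# measurement columns m1 ... m10 (all names here are placeholders).
import pandas as pd
from scipy import stats

df = pd.read_csv("skull_measurements.csv")   # hypothetical data file

for col in [f"m{i}" for i in range(1, 11)]:
    sick = df.loc[df["sick"] == 1, col]
    healthy = df.loc[df["sick"] == 0, col]
    t, p = stats.ttest_ind(sick, healthy)
    print(f"{col}: t = {t:.2f}, p = {p:.4f}")
```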
Thanks,
JK
 

bugman

Super Moderator
#7
OK, so you want to see whether the ten different measurements are related to status (sick or not)?

The main issue here is independence of the measurements. Because they come from the same animal, you have to assume non-independence.

It sounds to me like you should be analysing your data with either a one-factor MANOVA or a discriminant function analysis (DFA), depending on your objectives. DFA will focus on classification, while MANOVA will focus more on between-group differences. I'm a bit rusty on the details because I haven't used them in a while, but hopefully this puts you on the right track.
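If you want something to play with while you read up on them, here is a very rough Python sketch of both routes; the data frame, file name, and column names (m1 ... m10, status) are just placeholders, not your actual variables.

```python
# Rough sketch of the two suggestions: one-factor MANOVA and DFA.
# Assumes a table with measurement columns m1 ... m10 and a "status"
# column (sick / healthy); all names here are placeholders.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("skull_measurements.csv")       # hypothetical data file
cols = [f"m{i}" for i in range(1, 11)]

# One-factor MANOVA: do the two status groups differ on the measurements jointly?
manova = MANOVA.from_formula(" + ".join(cols) + " ~ status", data=df)
print(manova.mv_test())                          # Wilks' lambda, Pillai's trace, ...

# DFA (classification angle): how well do the measurements separate the groups?
dfa = LinearDiscriminantAnalysis().fit(df[cols], df["status"])
print("In-sample classification accuracy:", dfa.score(df[cols], df["status"]))
```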

Phil.
 
#8
Thanks for the suggestions, and I will look into them. But for my own understanding I would still like to be comfortable with the main question I am asking: would a study design such as I've described, with no comparisons being made between measurements, require correction for multiple comparisons?

Again, I am not looking at several groups and searching for a difference among them. If I choose to ignore all but the two measurements that have shown extraordinary promise in my preliminary data, my remaining measurements are still honest, true and valid, and will show significance.

Thanks to anyone who cares to respond and educate me a bit.
jk
 
#9
Let us take the 10 measurements as 10 characteristics,
and you have two groups already defined: sick and non-sick.
Then I think a multiple comparison correction is not needed (Student's t, or what bugman suggested, will do).
IF, if, if you want to do more and correlate the 10 characteristics with the probability of disease, you can do logistic regression, with sick/non-sick (binary) as the dependent variable and the 10 characteristics as independent variables; a rough sketch is below.
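A minimal Python sketch of that logistic regression; the data frame, file name, and column names are placeholders, not the original poster's variables.

```python
# Logistic regression: probability of disease as a function of the 10 characteristics.
# Assumes a table with a 0/1 "sick" column and measurement columns m1 ... m10
# (all names here are placeholders).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("skull_measurements.csv")   # hypothetical data file
cols = [f"m{i}" for i in range(1, 11)]

X = sm.add_constant(df[cols])                # intercept + the 10 characteristics
y = df["sick"]                               # 1 = sick, 0 = not sick

model = sm.Logit(y, X).fit()
print(model.summary())                       # one coefficient and p-value per measurement
```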