# Calculating effect size.

#### psychuk

##### New Member
Hi,

I'm learning about ANOVAs and understand that to report results I have to calculate an effect size. We have been taught to calculate W^2 (omega squared) first and then carry on with comparison tests. Can you tell me whether W^2 has to be significant before computing comparison tests, or is it just reported for information in the write-up?

Many Thanks

#### noetsi

##### No cake for spunky
Omega squared tells you whether, and how strongly, the independent variables explain the variance in the dependent variable. If they are not significantly influencing the DV, it may make no sense to report an effect size, since none should exist. This is a bit like calculating how much water is running through a pipe (effect size) when you know the pipe is not working: why would you expect any flow?

#### Eugenie

##### New Member
Is it considered best practice to calculate the significance of omega squared in ANOVA settings?
This is the first place I have heard of it. Everything else I have read (admittedly not very advanced texts) just does a significance test on the F statistic, then a follow-up to get confidence intervals for comparisons via contrasts, Bonferroni, or Tukey-Kramer.

What is the advantage of the omega squared statistic over the F statistic? Why do ANOVA tables not include it?

#### noetsi

##### No cake for spunky
You don't compute omega or eta squared instead of the F test; they tell you different things. The F test tells you whether your model has any predictive value (literally, whether the group means are all equal or differ in at least one case). Omega squared tells you essentially how strong your model is, that is, how much of the variance in the DV is being explained. It is similar to the difference between the F test and R squared in regression.
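To make that distinction concrete, here is a minimal sketch (toy data I invented; assuming a balanced one-way fixed-effects ANOVA) computing the F statistic, eta squared, and omega squared from the same sums of squares:

```python
# Minimal one-way fixed-effects ANOVA by hand, on invented data.
groups = [
    [4.1, 5.0, 5.5, 4.8],
    [6.2, 6.9, 7.1, 6.5],
    [5.0, 5.4, 4.9, 5.2],
]

k = len(groups)                          # number of groups
n = sum(len(g) for g in groups)          # total sample size
grand_mean = sum(x for g in groups for x in g) / n

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
ss_total = ss_between + ss_within

ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)

f_stat = ms_between / ms_within          # "are any group means different?"
eta_sq = ss_between / ss_total           # proportion explained (biased upward)
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

print(f"F = {f_stat:.2f}, eta^2 = {eta_sq:.3f}, omega^2 = {omega_sq:.3f}")
```

Omega squared comes out a little below eta squared here, which is typical: omega squared corrects the upward bias of eta squared as an estimate of explained variance.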

Most commercial software reports eta squared, which has significant problems but is older and better known than omega squared. Why omega squared is not reported, if that is the case, I have no idea. In many cases reporting conventions were formalized years ago, when the capacity to calculate a given statistic was limited enough that few could do it, and practice has not caught up with modern computing capacity.

I am not sure what best practice is in the literature although it makes sense to report both.

#### Eugenie

##### New Member
Is omega squared something that really only makes sense to calculate for random or mixed effects models, or for situations where randomisation into cells was not possible? It seems like for fixed effects models and a randomized experimental design, it shouldn't really tell you anything useful. I can see how it might be useful otherwise. But even in those cases, if the overall effect is small but there is a significant F value, it seems you can still attempt (e.g. by Bonferroni) to determine which means are significantly different (even if the differences are not large).

#### noetsi

##### No cake for spunky
No, omega squared can be used in any ANOVA (at least according to my text). It tells you the same thing in a fixed effects design as in any other: how much the IVs explain the DV.

#### Eugenie

##### New Member
I think I have a variation on your plumbing metaphor. Suppose there are several taps in a house, but they all, of course, link to the same main. The F test tells you if there is a leak in the house somewhere; it is basically a test on the main pipe. If there is a leak, you can directly try to find where (Bonferroni). Alternatively, you can first ask how much is leaking altogether in the house (omega squared), then find where. I suppose if there are enough taps (groups), it is possible to have a large leak overall (large omega squared) even if the amount leaking from each tap is negligible (confidence intervals all include zero).

Is that right?
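If it helps, the "find which tap" step can be sketched like this (all data invented; I'm assuming SciPy's `f_oneway` and `ttest_ind` are available):

```python
# Sketch: overall F test, then Bonferroni-corrected pairwise comparisons.
# The taps/groups and the 0.05 alpha level are invented for illustration.
from itertools import combinations
from scipy import stats

taps = {
    "kitchen":  [4.1, 5.0, 5.5, 4.8],
    "bathroom": [6.2, 6.9, 7.1, 6.5],
    "garden":   [5.0, 5.4, 4.9, 5.2],
}

# Step 1: is there a leak anywhere in the house?
f_stat, p_overall = stats.f_oneway(*taps.values())

# Step 2: which tap? Bonferroni divides alpha by the number of comparisons.
pairs = list(combinations(taps, 2))
alpha_adj = 0.05 / len(pairs)

for a, b in pairs:
    _, p = stats.ttest_ind(taps[a], taps[b])
    verdict = "differ" if p < alpha_adj else "no clear difference"
    print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```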

Incidentally, there was mention of significance of omega squared. What distribution is used for this?

Also, I can understand situations where we care about the Bonferroni confidence intervals. For instance, in a randomised medical trial comparing three treatments, you ultimately want to know how much better the new treatment is than the previous ones, not particularly how much of the variation among patients in the trial is accounted for by the difference in treatments.

I suppose that if you are trying to determine which factor is most responsible for explaining differences in outcomes, that is when you might compare omega squared statistics. For instance, if you are trying to tell whether school attended or parental income is more responsible for A-level results, you could compare the omega squared statistics for one-way ANOVAs done on these two variables? Is that right?
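The comparison described above might look like this in code (a sketch only; the scores and the two groupings are entirely invented, and "school" and "income band" are hypothetical factors):

```python
# Sketch: comparing omega squared across two separate one-way ANOVAs.

def omega_squared(groups):
    """Omega squared for a one-way fixed-effects ANOVA on a list of groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    ss_b = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_w = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_w = ss_w / (n - k)
    return (ss_b - (k - 1) * ms_w) / (ss_b + ss_w + ms_w)

# The same twelve (invented) exam scores, split by two hypothetical factors:
by_school = [[62, 64, 61, 63], [75, 77, 74, 76], [68, 70, 69, 71]]
by_income = [[62, 75, 68, 64], [77, 61, 74, 70], [63, 76, 69, 71]]

# A slightly negative omega squared just means the factor explains
# essentially nothing (it happens whenever the F ratio falls below 1).
print("school:", round(omega_squared(by_school), 3))
print("income:", round(omega_squared(by_income), 3))
```

In this toy split the school grouping accounts for nearly all of the variance, while the income grouping accounts for essentially none, so comparing the two omega squared values does answer "which factor matters more" for these data.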
