# A>0, B<0: is A>B?

#### Cel

##### New Member
I was wondering if you could help with a very quick statistics question. If, in condition A, subjects are found to perform significantly better than chance, and in condition B subjects are found NOT to perform significantly differently from chance, are there any conditions under which I can say, based on these two results ONLY, that subjects perform significantly better in condition A than in condition B?

e.g. if chance level was 0, the condition A average was, say, 55 and the condition B average -10, and condition A was found to be significantly different from 0 but condition B was NOT significantly different from 0, can I say, based SOLELY on these analyses, that subjects in condition A performed significantly differently from condition B?

I hope this makes sense and is easy to answer ... Many thanks !

Cheers

#### Xenu

##### New Member
> e.g. if chance level was 0, condition A average say 55 and condition B average -10; and condition A was found to be significantly different from 0 but condition B NOT significantly different from 0, can I say, based SOLELY on these analyses, that in condition A subjects performed significantly different from condition B?
No, you cannot. There is a chance that B is actually much better than 0 but performed badly because of randomness. You have to compare the two groups directly to draw such a conclusion.
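One way to see why the direct comparison is the test that matters: the made-up numbers below (n = 10 per condition, chance level 0, chosen only for illustration) give a condition A that is clearly above chance and a condition B that is not distinguishable from chance, and the two-sample (Welch) t-statistic is what actually answers "is A > B?". The data and the hand-rolled t formulas are a sketch, not anyone's real experiment.

```python
import math
from statistics import mean, stdev

# Hypothetical scores; invented for illustration (chance level = 0).
cond_a = [55, 70, 40, 62, 48, 58, 66, 45, 60, 51]
cond_b = [-10, 30, -45, 25, -60, 15, -35, 20, -40, 10]

def one_sample_t(xs, mu=0.0):
    """One-sample t-statistic against a fixed chance level mu."""
    return (mean(xs) - mu) / (stdev(xs) / math.sqrt(len(xs)))

def welch_t(xs, ys):
    """Two-sample Welch t-statistic (unequal variances allowed)."""
    vx, vy = stdev(xs) ** 2, stdev(ys) ** 2
    return (mean(xs) - mean(ys)) / math.sqrt(vx / len(xs) + vy / len(ys))

# A sits far above 0; B hovers around 0 with large variance.
print(one_sample_t(cond_a))       # well past the df=9 critical value of 2.262
print(one_sample_t(cond_b))       # small in magnitude: not significant vs 0
print(welch_t(cond_a, cond_b))    # the comparison that must actually be run
```

The two one-sample tests on their own never compare A with B; only the third statistic does, and with noisier data it can easily fail to reach significance even when the first two results look exactly like the pattern described in the question.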

##### New Member
Maybe I'm misunderstanding your problem, but I'm not sure why this is an issue. Statistically comparing these two groups doesn't take much time or effort. If you need help with that, let us know.

#### Cel

##### New Member
Thanks. Here is another reply I got elsewhere.

The basic answer is 'probably yes': if condition A is significantly > 0 and condition B is non-significantly < 0, then condition A is probably significantly > B. But there are a couple of ways to be sure...

Firstly, when fitting a linear or generalised linear model, an 'identifiability constraint' is used. One common choice is 'sum-to-zero', where the parameter estimates for all the groups sum to 0; another is to set one of the groups to 0, so that all the others are estimated in comparison to that reference group.

I guess the program you are using does the latter, with the control group set to 0? If so, you could change it so that group A or B is set to 0, and then see whether the other is significantly different. This would be a statistically sound way of testing groups A and B against each other.
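The releveling idea above can be sketched with plain group means (values borrowed from the example earlier in the thread; the function is an illustration of treatment coding, not any particular package's API): under a set-one-group-to-0 constraint, the intercept absorbs the reference group's mean, and every other coefficient is "that group's mean minus the reference group's mean". Changing the reference therefore changes which pairwise comparison each coefficient reports.

```python
# Hypothetical group means from the example in this thread (chance = 0).
means = {"control": 0.0, "A": 55.0, "B": -10.0}

def treatment_coefs(means, reference):
    """Coefficients under treatment coding: each group minus the reference."""
    intercept = means[reference]
    return {g: m - intercept for g, m in means.items() if g != reference}

print(treatment_coefs(means, "control"))  # A and B each compared to control
print(treatment_coefs(means, "A"))        # now B's coefficient is B minus A
```

With "A" as the reference, the coefficient (and its standard error, in a real fit) for B directly tests A versus B, which is exactly the comparison the original question needs.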

A second way would be to use confidence intervals. If you haven't come across these before, http://en.wikipedia.org/wiki/Confidence_interval covers them fairly well. Basically, if you want to test whether A = B at the 5% significance level, you look at the 95% confidence intervals for A and B and see whether they overlap. If they do, you do not reject A = B at the 5% level; if they do not, you reject it.
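The overlap check can be sketched as below, reusing the invented n = 10 samples from earlier in the thread (the 2.262 critical value is the two-sided 5% t quantile for df = 9). Worth knowing: non-overlap of two 95% intervals is a conservative shortcut, so a direct two-sample test can reject in some cases where the intervals still overlap slightly.

```python
import math
from statistics import mean, stdev

# Same hypothetical data as the earlier sketch (chance level = 0).
cond_a = [55, 70, 40, 62, 48, 58, 66, 45, 60, 51]
cond_b = [-10, 30, -45, 25, -60, 15, -35, 20, -40, 10]

def ci95(xs, tcrit=2.262):
    """95% confidence interval for the mean; tcrit assumes df = 9 (n = 10)."""
    half = tcrit * stdev(xs) / math.sqrt(len(xs))
    return (mean(xs) - half, mean(xs) + half)

a_lo, a_hi = ci95(cond_a)
b_lo, b_hi = ci95(cond_b)

# B's interval straddles 0 (not significant vs chance) ...
print((b_lo, b_hi))
# ... but the two intervals do not overlap, so A = B is rejected here.
print(a_lo <= b_hi and b_lo <= a_hi)
```

Note that B's interval containing 0 is exactly the one-sample result from the question; the A-versus-B conclusion comes from the second check, not from that.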

I was wondering whether I should run a 2 by 2 RM-ANOVA or multiple paired t-tests, and the above question popped up ...