Generalisation of 'overlapping confidence intervals'

#1
I had a question about a common (and possibly incorrect) practice in the medical field of checking whether two 95% confidence intervals overlap as a test of statistical significance at the 5% level. I'm aware this is overly conservative. I've heard that 83% confidence intervals are 'better', but I haven't seen a formal derivation and I'm sceptical.

Is there a generalisation of this 'method' that yields an exact p-value? Say I have two estimates, each with a 95% confidence interval (or a standard error), and I'd like to test the hypothesis that there is no difference.
 
#2
It's probably a lot easier to work with the standard errors than with the 95% CIs. From those you can construct a z-test: z = difference / sqrt(sum of squared standard errors). This is, if anything, slightly anti-conservative. The two estimates also have to be independent; otherwise the answer is basically no.
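As a minimal sketch of that z-test in Python (the function name and the example numbers are hypothetical; this assumes the two estimates are independent and approximately normal):

```python
import math

def z_test(x1, se1, x2, se2):
    """Two-sided z-test for the difference of two independent
    estimates, given their standard errors."""
    z = (x1 - x2) / math.sqrt(se1**2 + se2**2)
    # two-sided p-value from the standard normal CDF:
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical example: estimates 10.0 and 12.5 with
# standard errors 0.8 and 0.9
z, p = z_test(10.0, 0.8, 12.5, 0.9)
print(f"z = {z:.2f}, p = {p:.4f}")
```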
 
#3
Exactly what I needed. Thanks for that :D
 
#5
There is a derivation of the 83% interval here:
https://chris-said.io/2014/12/01/in...rval-a-useful-trick-for-eyeballing-your-data/
Informally, significance starts at about half an error bar of overlap. I've used the attached spreadsheet at workshops to illustrate this.
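A short sketch of where the ~83% figure comes from, assuming the two estimates have equal standard errors: two z*·SE intervals just stop overlapping when |x1 − x2| = 2·z*·SE, while the z-test is significant at the 5% level when |x1 − x2| = 1.96·sqrt(2)·SE. Equating the two gives z* = 1.96/sqrt(2):

```python
import math

# Critical multiplier for which "CIs just touch" matches
# significance at the 5% level (equal-SE case):
#   2 * z_star * SE = 1.96 * sqrt(2) * SE  =>  z_star = 1.96 / sqrt(2)
z_star = 1.96 / math.sqrt(2)  # ~1.386

# Coverage of a z_star * SE interval under the standard normal:
# 2 * Phi(z_star) - 1 = erf(z_star / sqrt(2))
coverage = math.erf(z_star / math.sqrt(2))
print(f"z* = {z_star:.3f}, coverage = {coverage:.1%}")
```

With unequal standard errors the equivalence is only approximate, which is why the overlap rule is a heuristic rather than an exact test.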
Thanks a lot for this, and good to see the same formula I was already using come up. It was surprisingly hard to find a good answer searching for keywords like 'confidence interval p-value equation'. This forum has been very helpful; many thanks to both of you above.