I have a question about a common (and possibly incorrect) practice in the medical field: judging statistical significance at the 95% level by whether two 95% confidence intervals overlap. I'm aware this is overly conservative. I've heard that 83% confidence intervals are 'better' for this purpose, but I've not seen a formal derivation and I'm sceptical.
Is there a generalisation of this 'method' that yields an exact p-value? Say I have two estimates, each with a 95% confidence interval (or, equivalently, a standard error), and I'd like to test the hypothesis that there is no difference between them.
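To make the setup concrete, here is a sketch (my own code, not from any particular source) of the standard two-sample z-test on the difference, which I understand is the usual benchmark here. It assumes the two estimates are independent and approximately normal; `p_value_difference` is my own naming. The comments also sketch where the ~83% figure comes from in the equal-standard-error case.

```python
from math import sqrt, erf

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

def p_value_difference(x1, se1, x2, se2):
    """Two-sided p-value for H0: no difference between two independent
    estimates x1 and x2, each approximately normal with the given SEs."""
    z = (x1 - x2) / sqrt(se1 ** 2 + se2 ** 2)
    return 2 * (1 - norm_cdf(abs(z)))

# Where '83%' comes from (equal-SE case): two intervals of half-width
# z_star * se just touch when |x1 - x2| = 2 * z_star * se, while the
# z-test gives p = 0.05 at |x1 - x2| = 1.96 * sqrt(2) * se.  Matching
# the two thresholds gives z_star = 1.96 / sqrt(2):
z_star = 1.96 / sqrt(2)
level = 2 * norm_cdf(z_star) - 1  # roughly 0.83, hence ~83% intervals
```

So, as I understand it, with equal standard errors "the 83% intervals just touch" coincides with p = 0.05, but with unequal standard errors no single confidence level makes overlap an exact test, which is why I'm asking whether a proper generalisation exists.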