We just had a very interesting discussion about the role of confidence intervals and p-values in determining significance, and I realised there is a point that is quite unclear to me. So here is the question:

I am considering a simple situation, a two-sample proportion test, say. For each sample I get a confidence interval for the true proportion in that group, and I also get one p-value for testing the null hypothesis that the groups do not differ, i.e. that the true proportions in the two groups might actually be equal.

Now, if the confidence intervals do not overlap, the p-value will be low, so in this case the confidence intervals and the p-value agree.

If the confidence intervals overlap, however, this is not sufficient to conclude that the p-value is high - there are cases where the confidence intervals overlap but the p-value is still below 0.05 - so the two can give us contradictory results.
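To make this concrete, here is a small numerical sketch (the counts 30/100 and 45/100 are my own made-up numbers, not from any real data): the two 95% Wald intervals overlap, yet the pooled two-sample z-test is significant at the 0.05 level.

```python
# Sketch: overlapping 95% CIs but p < 0.05 in a two-sample proportion test.
# Uses only the standard library (Wald intervals, pooled z-test).
from math import sqrt, erf

def wald_ci(x, n, z=1.96):
    """95% Wald confidence interval for a single proportion x/n."""
    p = x / n
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def two_prop_z_test(x1, n1, x2, n2):
    """Two-sided p-value of the pooled two-sample z-test for proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided tail probability from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

lo1, hi1 = wald_ci(30, 100)   # group 1: roughly (0.210, 0.390)
lo2, hi2 = wald_ci(45, 100)   # group 2: roughly (0.352, 0.548)
p_value = two_prop_z_test(30, 100, 45, 100)

print(lo2 < hi1)   # True  -> the intervals overlap
print(p_value)     # about 0.028 -> below 0.05
```

The intuition behind the discrepancy: each CI uses its own standard error, and comparing endpoints implicitly adds those errors, while the test uses the (smaller) standard error of the *difference*, so overlap is a stricter criterion than the test itself.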

My question is a practical one - significance is of interest for deciding whether there could be a real difference between my groups. So why should I trust the p-value MORE than the confidence interval overlap when it comes to deciding whether the difference is real or not?

I guess statistical significance is defined as a low p-value, but what I am really interested in is the real-life decision.

thanks a lot

rogojel