- Thread starter hurzz
What would you suggest?

With kind regards

Karabiner

H0: difference in proportions = 0

Ha: difference in proportions ≠ 0

So I would end up with three hypothesis tests (Golden Palm vs. Palm Royale, Golden Palm vs. Palm Princess, Palm Royale vs. Palm Princess), but I'm wondering if there is a more straightforward way to do that!
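
Those three pairwise tests can be sketched with a pooled two-proportion z-test using only the standard library. The counts below are hypothetical, assuming 100 replies per hotel so that they match the NO/REPLIES proportions quoted later in the thread; substitute the real NO and REPLIES counts.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test of H0: p1 = p2 (two-sided)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical (NO, REPLIES) counts per hotel
hotels = {"Golden Palm": (41, 100),
          "Palm Royale": (14, 100),
          "Palm Princess": (26, 100)}

names = list(hotels)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        z, p = two_prop_ztest(*hotels[names[i]], *hotels[names[j]])
        print(f"{names[i]} vs {names[j]}: z = {z:.2f}, p = {p:.4f}")
```

Since three tests are run, remember to adjust for multiple comparisons, e.g. with a Bonferroni correction that compares each p-value to 0.05/3.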

"How could I test if there is a **significant difference in the proportion** of customers that would return for each hotel?"

All you need to do is calculate NO/REPLIES: 41%, 14%, and 26%. There are obviously significant differences between each pair.

It's easy.

joe b.

You are wrong. If that were the case, there would be no need for statistics.

No, Dason, you are wrong. I learned many years ago, and taught my students, that there are two times when statistics is inappropriate.

This case is the first: one where we know everything we need to know by looking at the data. We can dazzle the uninformed, but bring nothing to the table.

The second is where we lack the data to make a reasonable statement; we need to stop, quit, or get more data.

Statistics is like a hammer, just because you have one doesn't mean you have to hit something.

joe b.

Beyond sampling error, I would ask whether these are random samples and whether subjects are exchangeable between groups.

If the generic question is whether there are differences between the groups' responses, I would define my level of significance, conduct the three rate-difference calculations, adjust the confidence intervals for false discovery, and plot them. Or, if using a Bayesian approach, repeat the process and plot the three posterior distributions for the rate differences. The three pairwise comparisons are likely more informative than a binomial test with just p-values or than looking at the standardized residuals from a chi-square test.
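
A minimal stdlib sketch of the rate-differences-with-adjusted-intervals idea, using a simple Bonferroni adjustment in place of an FDR procedure, and hypothetical counts (the thread only gives the proportions):

```python
from statistics import NormalDist

def diff_ci(x1, n1, x2, n2, z_crit):
    """Wald confidence interval for p1 - p2 with unpooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    d = p1 - p2
    return d - z_crit * se, d + z_crit * se

alpha, m = 0.05, 3  # three pairwise comparisons
z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))  # Bonferroni-adjusted critical value

# Hypothetical (NO, REPLIES) counts per hotel
counts = {"Golden Palm": (41, 100),
          "Palm Royale": (14, 100),
          "Palm Princess": (26, 100)}
names = list(counts)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        lo, hi = diff_ci(*counts[names[i]], *counts[names[j]], z_crit)
        print(f"{names[i]} - {names[j]}: [{lo:.3f}, {hi:.3f}]")
```

An interval excluding zero flags a pair whose rates differ at the family-wise 5% level; for a true false-discovery-rate adjustment one would apply the Benjamini-Hochberg procedure to the p-values instead.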

While this specific case does show distinct differences, note the wide confidence intervals around each hotel's proportion. With slightly smaller sample sizes and/or smaller effect sizes, the differences would not be statistically significant.
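
To see how wide those intervals are, here is a Wilson score interval for each hotel's proportion, again with hypothetical counts of 100 replies per hotel:

```python
from statistics import NormalDist

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for a single proportion."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
    return centre - half, centre + half

# Hypothetical (NO, REPLIES) counts; the thread only gives 41%, 14%, 26%
for name, (x, n) in {"Golden Palm": (41, 100),
                     "Palm Royale": (14, 100),
                     "Palm Princess": (26, 100)}.items():
    lo, hi = wilson_ci(x, n)
    print(f"{name}: {x / n:.0%} [{lo:.3f}, {hi:.3f}]")
```

With 100 replies per hotel the intervals are roughly 19 percentage points wide, which is why eyeballing the point estimates alone is not enough.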

So your solution is to not try anything? If your advice is merely to look at the sample estimates and call it a day, then I'm not sure why you feel you should be dispensing advice. Accounting for variability is the reason statistics exists as a field. If you have a specific reason why you think something like a chi-square test is inappropriate in this case, I'd be happy to discuss your objections. But if all you have is "yeah, it looks like there are differences, so obviously there are," then you're just plain wrong. I could come up with lots of examples where just looking at the data might trick people into thinking there truly are differences even though there aren't.
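
A sketch of that chi-square test of homogeneity on the 3x2 table, with hypothetical counts. With three hotels and two outcomes the test has exactly 2 degrees of freedom, and for 2 df the chi-square survival function simplifies to exp(-x/2), so no special library is needed:

```python
import math

# Hypothetical (NO, YES) counts per hotel; only the proportions appear in the thread
observed = [[41, 59],   # Golden Palm
            [14, 86],   # Palm Royale
            [26, 74]]   # Palm Princess

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)

# Pearson chi-square statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
           / (row_totals[i] * col_totals[j] / total)
           for i in range(len(observed)) for j in range(2))
dof = (len(observed) - 1) * (2 - 1)  # = 2
p_value = math.exp(-chi2 / 2)        # chi-square survival function, valid only for 2 df
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.5f}")
```

A significant result only says the three proportions are not all equal; pairwise follow-up tests are needed to say where the differences lie. For a general table, `scipy.stats.chi2_contingency` handles any degrees of freedom.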

Why don't you show us that I'm incorrect? Anybody?