Large Sample Surveys

Dear Forum,

I encountered an employee satisfaction survey report that sampled 50,000 employees (try not to laugh, that's a sample, not the population). The authors reported % agreement based on Likert scales, which is not great, but OK, and compared those "scores" to a normative benchmark (for example, 65% of respondents agree with this statement, 35% disagree; the norm is 83% agree). However, the part that threw me off was that they proclaimed that a 7% difference from the benchmark for any score is considered meaningful. Certainly this is not the margin of error, not with n = 50,000. I've seen this before in other surveys, where someone will state that any difference above x% is meaningful; I can only assume that they mean x to be the margin of error.

Can anyone explain how a 7% difference in score could possibly be "significant" / meaningful with n = 50,000? The only thing I could think of was the average total score change year over year... but that seems far-fetched.
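To put a number on why 7% can't be a margin of error here: a minimal sketch of a one-sample z-test against the benchmark, treating the 83% norm as a fixed value and using the example figures from this post (the test choice is my assumption, not something the report states):

```python
from math import sqrt

# Example figures from the post: 65% observed agreement vs. an 83% benchmark,
# n = 50,000. The benchmark is treated as a fixed (error-free) value.
n = 50_000
p_obs, p_bench = 0.65, 0.83

# Standard error of the observed proportion
se = sqrt(p_obs * (1 - p_obs) / n)

# z-statistic for the observed difference from the benchmark
z = (p_obs - p_bench) / se

print(f"SE = {se:.4f}, z = {z:.1f}")
```

The standard error is about 0.2 percentage points, so an 18-point gap is dozens of standard errors out; at this sample size essentially any visible difference is statistically significant, which is exactly why the 7% threshold can't be a sampling-error criterion.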

So, either I missed something in survey stats 101, or that whole 7% thing is completely made up.

"Meaningful" is not the the same as "statistically significant". In studies with large sample sizes, statistically significant differences may be rather small in practical terms. In this report, the 7% threshold may simply be selected to filter out results "worth paying attention to". Whether 7% is the right number, is of course another question.
Thanks for the reply. I am quite familiar with the difference between practical significance and statistical significance. Large samples will flag practically meaningless differences as statistically significant simply because of the power that comes with a large n. Other statisticians have suggested reporting an effect size such as Cohen's d. I suppose my question was more about the mechanics for surveys with such large samples (which is quite extraordinary for the social sciences).

For example, IQ tests (and similar assessments) use the deviation IQ transformation, which has a mean of 100 and an SD of 15. T scores have a mean of 50 and an SD of 10; in a practical sense, psychologists will consider scores beyond the first standard deviation to be "meaningful" (although not in the sense of statistical significance). This, I understand, is because roughly 68% of the population falls within one standard deviation of the mean.

So, for these kinds of employee engagement / satisfaction surveys, I'm not sure how they establish a meaningful cut-off (or difference from the benchmark) if they are not using the SEM. Is there another technique or method that has empirical validity? Simply saying "we're going to cut off right about here" makes no sense from a statistical perspective.
Any suggestions would help.
Thanks for the discussion!
I think they determine what they consider "meaningful" not from any statistical approach, but from experience and domain knowledge.

You can use an online margin-of-error calculator to check. With an infinite-population assumption, the margin of error at 95% confidence will be around 0.4%. But since there are few companies with that many employees, I suspect a large proportion of the population was sampled, and the margin of error may be even lower.
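The calculation behind both of those figures is easy to reproduce directly. A small sketch: the standard 95% margin of error at worst-case p = 0.5, plus the finite population correction. Note the population size N = 60,000 is purely an assumed illustration, not a figure from the thread:

```python
from math import sqrt

z = 1.96        # 95% confidence
n = 50_000      # sample size from the thread
p = 0.5         # worst-case proportion (maximizes the margin of error)

# Infinite-population margin of error (the ~0.4% figure)
moe_infinite = z * sqrt(p * (1 - p) / n)

# Finite population correction, with an ASSUMED company size of N = 60,000
N = 60_000
fpc = sqrt((N - n) / (N - 1))
moe_finite = moe_infinite * fpc

print(f"infinite-population MOE: {moe_infinite:.2%}")
print(f"with FPC (N = 60,000):   {moe_finite:.2%}")
```

With 50,000 of an assumed 60,000 employees sampled, the correction cuts the margin of error by more than half, reinforcing the point that 7% cannot be a sampling-error threshold.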