According to the central limit theorem (CLT), the distribution of sample means approaches a normal distribution with standard deviation sigma/sqrt(n), where sigma is the standard deviation of the population and n is the sample size.
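To make sure I had the statement right, here's a quick simulation I can sketch (the population parameters and sample size below are arbitrary choices of mine, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0           # population standard deviation (arbitrary)
n = 25                # sample size (arbitrary)
num_samples = 100_000 # number of repeated samples

# Draw many samples of size n and record each sample's mean.
samples = rng.normal(loc=0.0, scale=sigma, size=(num_samples, n))
sample_means = samples.mean(axis=1)

# The standard deviation of the sample means should be close to sigma/sqrt(n).
print(sample_means.std())   # empirical standard deviation of the mean
print(sigma / np.sqrt(n))   # theoretical value: 2/sqrt(25) = 0.4
```

The empirical value lands very close to 0.4, matching sigma/sqrt(n).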

I'd long since forgotten about the CLT and found myself running thought experiments. In my experiments, I used samples of size m, where m is the size of the population. I think that's the source of my first mistake (since there's no notion of population size inherent in the CLT at all). In any case, that idea led me to think that the standard deviation of the population is the limit, as n -> m, of the standard deviation of the sample.

As a result, I was surprised to be reminded that the CLT tells us that the standard deviation of the sample mean actually gets smaller as n increases, moving further from sigma rather than closer to it. It had seemed reasonable to me that the standard deviation of the sample mean would approximate the standard deviation of the population better and better as the sample size grew.

I'm sure that statistics students must get caught in similar paradoxes from time to time. Is there a simple explanation for the flaw(s) in this apparently logical reasoning? I suspect the source is an excess of calculus, which leads to seeing limits everywhere.

Thanks for your thoughts!