Central limit theorem

#1
It's been years since I took a statistics course. I recall taking several of them, but apparently to little good effect, since a friend recently posed a simple question that is really bugging me.

According to the central limit theorem (CLT), the standard deviation of the sample mean is sigma/sqrt(n), where sigma is the standard deviation of the population and n is the size of each sample.
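(Strictly, the sigma/sqrt(n) part is just the algebra of variances for a mean of n independent draws; the CLT proper adds that the distribution of that mean becomes approximately normal.) It's easy enough to check the claim numerically. Here's a minimal NumPy sketch; the population distribution, sample size, and replicate count are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 2.0      # population standard deviation (arbitrary choice)
n = 25           # sample size (arbitrary choice)
reps = 100_000   # number of repeated samples

# Draw many independent samples of size n and record each sample's mean.
samples = rng.normal(loc=0.0, scale=sigma, size=(reps, n))
sample_means = samples.mean(axis=1)

print(sample_means.std())   # empirical SD of the sample means
print(sigma / np.sqrt(n))   # theoretical value sigma/sqrt(n) = 0.4
```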

I'd long since forgotten about the CLT and found myself running thought experiments. In my experiments, I used samples of size m, where m is the size of the population. I think that's the source of my first mistake (since there's no notion of population size inherent in the CLT at all). In any case, that idea led me to think that the standard deviation of the population is the limit, as n -> m, of the standard deviation of the sample.
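To be fair, that limit does hold in one narrow sense: if you sample without replacement from a finite population, then at n = m the sample is the whole population, so the sample standard deviation equals the population standard deviation exactly. A quick sketch of that idea (the population and the sizes are, again, arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

m = 10_000
population = rng.exponential(scale=3.0, size=m)  # an arbitrary finite population
sigma = population.std()                         # population SD

# Sample without replacement; at n = m the sample *is* the population.
for n in [10, 100, 1_000, m]:
    sample = rng.choice(population, size=n, replace=False)
    print(n, sample.std())   # drifts toward sigma, equals it at n = m

print("population SD:", sigma)
```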

As a result, I was surprised to be reminded that the CLT tells us that the standard deviation of the sample mean actually gets smaller, moving further from rather than closer to sigma, as n increases. It had seemed reasonable to me that the standard deviation of the sample mean would better approximate the standard deviation of the population as the sample size increases.
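Writing this up, I tried putting the two quantities side by side: the standard deviation of the sample mean versus the typical standard deviation within a sample. Another sketch, with the distribution and sizes as my own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 2.0     # population standard deviation (arbitrary choice)
reps = 20_000   # repeated samples per sample size

for n in [5, 50, 500]:
    samples = rng.normal(scale=sigma, size=(reps, n))
    sd_of_means = samples.mean(axis=1).std()             # shrinks like sigma/sqrt(n)
    mean_sample_sd = samples.std(axis=1, ddof=1).mean()  # approaches sigma
    print(n, round(sd_of_means, 3), round(mean_sample_sd, 3))
```

The first column of numbers shrinks toward zero while the second settles at sigma, which hints that two different things are being measured.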

I'm sure that statistics students must get caught in similar paradoxes from time to time. Is there a simple explanation for the flaw(s) in this apparently logical reasoning? I suspect the real source is an excess of calculus, which leads to seeing limits everywhere. :D

Thanks for your thoughts!
 
#2
Actually, as is often the case, carefully writing out the problem exposes the solution. I now see a subtle equivocation between sample statistics in their own right (the standard deviation of a sample, which does converge to sigma) and the behavior of a sample statistic as an estimator (the standard deviation of the sample mean, which shrinks as sigma/sqrt(n)). I think I can work out the full explanation given some time, and some rest first. :D
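Actually, one bit more before signing off, for anyone who lands on this thread later. Under the standard i.i.d. assumptions (X_1, ..., X_n independent with mean mu and variance sigma^2), the two quantities written out are:

$$
\operatorname{Var}(\bar{X}) \;=\; \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) \;=\; \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i) \;=\; \frac{\sigma^2}{n},
\qquad
\mathbb{E}[s^2] \;=\; \mathbb{E}\!\left[\frac{1}{n-1}\sum_{i=1}^{n}\bigl(X_i-\bar{X}\bigr)^2\right] \;=\; \sigma^2 .
$$

The standard deviation of the sample mean, sigma/sqrt(n), shrinks because averaging cancels noise; the sample standard deviation s estimates sigma itself and converges to it as n grows. My "limit" intuition was about s; the statement my friend quoted was about X-bar.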