Confidence Intervals: Why Just One Sample?

#1
A typical problem in a statistics textbook for creating a confidence interval sounds like this: a sample of 50 blood pressures is taken, and the sample has a mean of 120 with a standard deviation of 5. Construct a 95% confidence interval. What troubles me is that a single sample is considered enough. Intuitively, wouldn't it be better to take numerous samples of size 50 and use the mean of the sample means, as well as the standard deviation of the sample means?
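For concreteness, here is roughly how the textbook interval comes out of the single sample's summary statistics. This is a minimal Python sketch, assuming the usual t-based interval for a mean and using the numbers from the post; scipy is used only for the critical value.

```python
import math
from scipy import stats

# Summary statistics from the textbook problem
n = 50
xbar = 120.0   # sample mean
s = 5.0        # sample standard deviation

se = s / math.sqrt(n)                   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided 95% critical value

lower, upper = xbar - t_crit * se, xbar + t_crit * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")  # roughly (118.58, 121.42)
```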
 

ondansetron

TS Contributor
#4
A typical problem in a statistics textbook for creating a confidence interval sounds like this: a sample of 50 blood pressures is taken, and the sample has a mean of 120 with a standard deviation of 5. Construct a 95% confidence interval. What troubles me is that a single sample is considered enough. Intuitively, wouldn't it be better to take numerous samples of size 50 and use the mean of the sample means, as well as the standard deviation of the sample means?
Ideally, we would just obtain the population data and wouldn't need a CI at all, but we can't, and it's also not practical to truly take repeated samples. The assumptions behind the interval construction (for example, a random, representative sample from an approximately normal distribution, or a sample large enough that the sampling distribution of the mean is approximately normal) are what let us rely on the good properties of the method. Additionally, you can use resampling techniques to simulate what you've described, and the results can be quite accurate.
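A minimal sketch of that resampling idea, assuming a percentile bootstrap on a single hypothetical sample generated with numpy purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of 50 blood pressures standing in for real data
sample = rng.normal(loc=120, scale=5, size=50)

# Bootstrap: resample the one sample many times and record each mean
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# Percentile bootstrap 95% interval
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"Bootstrap 95% percentile CI: ({lower:.2f}, {upper:.2f})")
```

The spread of boot_means plays the role of the "standard deviation of sample means" the original post is after, but without collecting any new data.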
 

Dason

Ambassador to the humans
#5
If we were able to take lots of samples of size 50, why wouldn't we just take one big sample of size (number of samples of size 50 we could take) * 50?
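A quick simulation makes the comparison concrete (the normal population with mean 120 and SD 5, and the counts k and n, are only illustrative assumptions): the mean of k sample means of size n has the same spread as the mean of one big sample of size k*n.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, k, n = 120, 5, 20, 50   # hypothetical population and sample sizes

# Strategy A: average the means of k samples of size n
means_of_means = [
    np.mean([rng.normal(mu, sigma, n).mean() for _ in range(k)])
    for _ in range(5_000)
]

# Strategy B: the mean of one big sample of size k*n
big_sample_means = [rng.normal(mu, sigma, k * n).mean() for _ in range(5_000)]

# Both estimators have spread close to sigma / sqrt(k*n)
print(np.std(means_of_means), np.std(big_sample_means), sigma / np.sqrt(k * n))
```

Averaging the k sample means is algebraically the same as averaging all k*n observations at once, so nothing is gained by splitting the data into separate samples.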

If you think your approach is better than one big sample, would you consider breaking a single sample of size 50 into 50 samples of size 1 to be a better approach?