Computing a confidence interval for 1 variable / Jarque-Bera test necessary?

#1
Say I have a data sample and want to compute the 95% confidence interval for the mean of this data sample.

If the sample is big enough, I can assume the sample mean is approximately normally distributed, right?

Or should I use the Jarque-Bera test to check this normality assumption first?

How do I go on? Compute the sample variance, select the critical value of the t-distribution (for the 95% confidence level), and then compute mean +- standard-error-of-the-mean * critical-t-value, where the standard error of the mean is s / sqrt(n)?

Is that it? :)
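
For concreteness, here is a minimal sketch of what I have in mind (Python with NumPy/SciPy; the data values are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# Made-up sample data, for illustration only
data = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2, 4.7, 5.8])
n = len(data)

# Optional normality check: Jarque-Bera test
# (null hypothesis: the data come from a normal distribution;
# note the test is asymptotic and unreliable for very small samples)
jb_stat, jb_p = stats.jarque_bera(data)
print(f"Jarque-Bera statistic = {jb_stat:.3f}, p-value = {jb_p:.3f}")

# 95% confidence interval for the mean using the t-distribution
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(n)    # standard error of the mean, s / sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
print(f"mean = {mean:.3f}, "
      f"95% CI = ({mean - t_crit * sem:.3f}, {mean + t_crit * sem:.3f})")
```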
 

JohnM

TS Contributor
#2
By the central limit theorem, you can assume the sampling distribution of the mean is approximately normal once n >= 30, so a formal normality test such as Jarque-Bera usually isn't necessary for a confidence interval on the mean. For smaller samples, use the t-distribution (which assumes the underlying data are roughly normal).

Check the Examples section on this site - there's a post on computing confidence intervals.
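
For illustration, here is a minimal Python/SciPy sketch (the sample sizes are arbitrary) showing why n >= 30 is a common cutoff: the t critical value converges to the normal (z) value as n grows.

```python
from scipy import stats

# Two-sided 95% critical value from the standard normal distribution
z_crit = stats.norm.ppf(0.975)
print(f"z critical value: {z_crit:.3f}")  # about 1.960

# The t critical value approaches the z value as the sample size grows
for n in (5, 10, 30, 100, 1000):
    t_crit = stats.t.ppf(0.975, df=n - 1)
    print(f"n = {n:4d}: t critical value = {t_crit:.3f}")
```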