Prediction interval for known variation and unknown mean

#1
I am trying to predict a probable range for a sample set that has a known standard deviation but whose mean is calculated from only a few points.

For example, if a bunch of parts are manufactured and each is measured several times, I would expect the standard deviation of the measurements to be similar from part to part; however, the mean would be expected to differ for each part. Both the manufacturing and measurement processes are expected to be normally distributed. How would I calculate the expected range for future measurements on a specific part, given a few initial measurements of that part and a standard deviation from a much larger sample set?

Could I calculate the confidence interval (95%) of the mean and then add 3*sigma?
 

obh

#2
This assumes the average is normally distributed.
(For a small sample size, the underlying data distribution should therefore be close to normal.)

When you know the standard deviation, you use a confidence interval based on that known standard deviation (sigma):
Average ± Z(1-α/2) * Sigma/sqrt(n)
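
As a quick illustration, here is a minimal Python sketch of that z-based interval. The measurement values, sigma, and alpha are made-up numbers, and scipy is assumed to be available for the normal quantile:

```python
from math import sqrt
from scipy.stats import norm

measurements = [10.02, 9.98, 10.05]   # hypothetical repeat measurements of one part
sigma = 0.04                           # known standard deviation (from a larger sample set)
alpha = 0.05                           # for a 95% interval

n = len(measurements)
avg = sum(measurements) / n
z = norm.ppf(1 - alpha / 2)            # Z(1 - alpha/2), about 1.96 for 95%

# Confidence interval for the mean: Average ± Z * Sigma / sqrt(n)
half_width = z * sigma / sqrt(n)
print(f"95% CI for the mean: {avg - half_width:.4f} to {avg + half_width:.4f}")
```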

When you don't know the standard deviation, you calculate the sample standard deviation (S) and use:
Average ± T(1-α/2, n-1) * S/sqrt(n)
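
And the corresponding sketch for the t-based interval, where S is estimated from the same small sample (again with illustrative numbers and scipy for the t quantile):

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

measurements = [10.02, 9.98, 10.05]    # hypothetical repeat measurements of one part
alpha = 0.05

n = len(measurements)
avg = mean(measurements)
s = stdev(measurements)                # sample standard deviation S
t_crit = t.ppf(1 - alpha / 2, df=n - 1)

# Confidence interval for the mean: Average ± T * S / sqrt(n)
half_width = t_crit * s / sqrt(n)
print(f"95% CI for the mean: {avg - half_width:.4f} to {avg + half_width:.4f}")
```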
 
#3
I think I need to calculate a prediction interval, since I am interested in the range of future samples, not just the mean.

Here is what I came up with...

Starting with the CI calculated for a known sigma:
CI = Average ± Z*Sigma/sqrt(n)
Adding in quadrature the variation of the mean (from the CI equation) and the variation of the population (Z*Sigma) yields:
PI = Average ± sqrt[(Z*Sigma/sqrt(n))^2 + (Z*Sigma)^2]
This simplifies to:
PI = Average ± Z*Sigma*sqrt(1/n + 1)
This matches the definition of a prediction interval I found in a reference text, except that Z and Sigma are substituted for T and S, per the previous post.
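
To make it concrete, here is a small sketch of that prediction interval, with sigma taken as known from the larger historical sample; all numbers are illustrative and scipy is assumed for the normal quantile:

```python
from math import sqrt
from scipy.stats import norm

measurements = [10.02, 9.98, 10.05]    # a few initial measurements of this part
sigma = 0.04                            # sigma from the much larger sample set
alpha = 0.05

n = len(measurements)
avg = sum(measurements) / n
z = norm.ppf(1 - alpha / 2)

# Prediction interval: Average ± Z * Sigma * sqrt(1/n + 1)
half_width = z * sigma * sqrt(1 / n + 1)
print(f"95% PI for a future measurement: {avg - half_width:.4f} to {avg + half_width:.4f}")
```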

To correct for a finite population:
PI = Average ± Z*Sigma*sqrt[(1/n)*(N-n)/(N-1) + 1]
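
The same sketch with the finite population correction applied to the mean's term; N here is an assumed total population size, chosen only for illustration:

```python
from math import sqrt
from scipy.stats import norm

measurements = [10.02, 9.98, 10.05]
sigma = 0.04
alpha = 0.05
N = 25                                  # assumed finite population size (illustrative)

n = len(measurements)
avg = sum(measurements) / n
z = norm.ppf(1 - alpha / 2)

# Prediction interval with FPC: Average ± Z * Sigma * sqrt[(1/n)*(N-n)/(N-1) + 1]
half_width = z * sigma * sqrt((1 / n) * (N - n) / (N - 1) + 1)
print(f"95% PI with finite population correction: {avg - half_width:.4f} to {avg + half_width:.4f}")
```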

Please let me know if this is incorrect.