I have a polymer degradation study to perform and am trying to determine a justifiable number of samples to test per time point. I found numerous polymer degradation publications, and they all used n=5 per time point. Does anyone know where n=5 comes from?
That is a very particular question; I'm not sure anyone here will know the answer. I would imagine cost, time, and precedent come into play, along with the size of the difference you need to detect and the variability of your measurement. You could also simulate the process to explore sample sizes and power.
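To illustrate the simulation idea: below is a minimal Monte Carlo power sketch in Python, assuming you compare a degraded time point against a baseline with a two-sample t-test. The effect size (here d = 2 standard deviations, not unusual for a strong degradation signal) and the normality assumption are placeholders you would replace with values from your own pilot data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power_two_sample_t(n, effect_size, n_sims=5000, alpha=0.05):
    """Estimate the power of a two-sample t-test by simulation.

    effect_size is the true mean difference in SD units (Cohen's d).
    Returns the fraction of simulated experiments where p < alpha.
    """
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)          # baseline time point
        b = rng.normal(effect_size, 1.0, n)  # degraded time point
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += 1
    return hits / n_sims

# How does power change as the per-time-point sample size grows?
for n in (3, 5, 8, 10):
    print(f"n = {n:2d}  estimated power = {power_two_sample_t(n, 2.0):.2f}")
```

Running this shows whether n=5 is actually adequate for the effect size you care about, or whether it is just convention; if your expected change per time point is smaller than 2 SD, the required n climbs quickly.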
This is pure speculation, but the following was very commonplace before calculators. A sample size of 5 was used because (a) it is an odd number, which makes the median easy to pick out, and (b) it makes the mean easy to calculate by hand: add the numbers, double the sum, and shift the decimal point one place (multiplying by 2 and dividing by 10 is the same as dividing by 5). This was important in industrial statistics because the work was done by floor personnel without slide rules.
You can see the carryover of this in the standard subgroup size of 5 for Xbar/R control charts. Using a subgroup size of 5 today is pure inertia.