Possible dumb question

In a test we perform, Moisture Vapor Transmission Rate (MVTR), a specimen is weighed every 24 hours. The specimen gains weight from moisture in the air. At some point, the rate of gain becomes quite consistent. The final MVTR result is derived after typically ten to fifteen weighings over a period of two or three weeks, based on the slope of the rate of weight gain. However, the application performs a rolling MVTR calculation at each weigh point, so we have a preliminary result each time the specimen is weighed. We also compute a correlation coefficient on the data.
One of our engineers believes that determining the standard deviation of the pool of MVTR results from each weighing of one specimen is a useful indication of variation or of "reliable data".
In such a population of data from one discrete test, I think calculating or utilizing SD is a misapplication of the statistic.
Who is right?
Thanks much.
I'm frightened by the phrases you use: "engineer believes", "I think".... Neither you nor the engineer should have any opinions until you both take at least Statistics 101.

Regarding your question: any statistic should be analyzed together with the corresponding standard error or p-value. By itself the statistic means very little. So the engineer is closer to the truth.
Well, hmm, ehh... of course anybody can have whatever opinions they like. As someone said, "you can have any opinions but you cannot have any facts". But of course you can ask about any facts, especially here at talkstats. Someone said that they wanted this to be a very friendly place. But it has happened that not-so-polite words have been said. (I am one of those who have been accused of being rude.)

About MVTR: If you measure that with the slope, then of course the slope parameter is of main interest. Of secondary interest is the standard error of that slope estimate (the "uncertainty in the slope"). When you fit a regression model, it is the standard deviation of the residuals that is of interest, not the standard deviation of the original measurements. The engineer is a little wrong about that.
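A minimal numpy sketch of that distinction, with made-up weight-gain data (the 0.12 g/day slope and noise level are purely illustrative, not real MVTR figures): the slope and its standard error are the informative quantities, while the SD of the pooled raw measurements mostly reflects the trend itself.

```python
import numpy as np

days = np.arange(1.0, 15.0)                    # fourteen daily weighings
rng = np.random.default_rng(0)
# hypothetical cumulative weight gain (g): linear trend plus noise
gain = 0.12 * days + rng.normal(0.0, 0.02, days.size)

# least-squares line: slope is the MVTR estimate
slope, intercept = np.polyfit(days, gain, 1)
residuals = gain - (slope * days + intercept)

# residual SD (two parameters estimated, so n - 2 degrees of freedom)
resid_sd = np.sqrt(residuals @ residuals / (days.size - 2))
# standard error of the slope: residual SD scaled by the spread of x
slope_se = resid_sd / np.sqrt(np.sum((days - days.mean()) ** 2))

print(f"slope (MVTR estimate): {slope:.4f} g/day")
print(f"standard error of slope: {slope_se:.4f}")
print(f"residual SD: {resid_sd:.4f}")
print(f"SD of raw pooled measurements: {gain.std(ddof=1):.4f}")
```

Note how the SD of the pooled measurements is dominated by the upward trend, which is exactly why it says little about how "reliable" the fit is.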

But isn't it about what is a good model and good descriptors of a model? A good model is a model that fits the data.

In this case, doesn't the data converge to an upper asymptote set by the surrounding air, like L(1-exp(-b*t)) + epsilon, where L is the upper asymptote, t is time, b is a rate parameter, and epsilon is a random term?
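If that saturating form is right, it can be fitted directly with nonlinear least squares. A sketch using scipy's `curve_fit` on simulated data (the true values L = 2.0 and b = 0.15 are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic(t, L, b):
    """Saturating growth toward an upper asymptote L."""
    return L * (1.0 - np.exp(-b * t))

t = np.arange(1.0, 22.0)                       # daily weighings, 3 weeks
rng = np.random.default_rng(1)
# simulated data from the model plus random noise (epsilon)
y = asymptotic(t, 2.0, 0.15) + rng.normal(0.0, 0.02, t.size)

# fit L and b; pcov gives their (co)variances, so SEs come for free
(L_hat, b_hat), pcov = curve_fit(asymptotic, t, y, p0=(1.0, 0.1))
L_se, b_se = np.sqrt(np.diag(pcov))

print(f"L = {L_hat:.3f} +/- {L_se:.3f}")
print(f"b = {b_hat:.3f} +/- {b_se:.3f}")
```

The point is that the parameter estimates come with their own standard errors, which is the "uncertainty" worth reporting rather than the SD of the rolling preliminary results.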

This page says you should consider temperature and moisture around the sample.


Omega Contributor
So the engineer is finding the SD for a single test across time, that is how it reads to me. Or is it the SD of multiple tests at each time point, which seems more appropriate?

My suggestion would be to collect data from multiple tests and fit a model. It sounds like the rate is not constant, so I would think about fitting a spline-based model, and also generating the SEs so that you can form confidence intervals. If the engineer was getting the SD across time for a single test, that wouldn't be too informative; if it was the latter, it would provide more info.
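One way to sketch that suggestion: fit a smoothing spline to the mean of several replicate tests and get pointwise confidence bands by bootstrapping over the tests. Everything here is simulated (8 hypothetical replicate tests of the same specimen type), and the bootstrap is one of several ways to get the intervals, not the only one:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
t = np.arange(1.0, 22.0)                       # daily weighings, 3 weeks
# hypothetical replicate tests: same underlying curve, different noise
tests = [2.0 * (1 - np.exp(-0.15 * t)) + rng.normal(0.0, 0.03, t.size)
         for _ in range(8)]

grid = np.linspace(1.0, 21.0, 50)
curves = []
for _ in range(200):
    # resample whole tests with replacement, average, then smooth
    sample = [tests[i] for i in rng.integers(0, len(tests), len(tests))]
    spl = UnivariateSpline(t, np.mean(sample, axis=0), s=0.01)
    curves.append(spl(grid))
curves = np.asarray(curves)

# pointwise 95% confidence band from the bootstrap distribution
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
```

Plotting `lo` and `hi` around the fitted curve would show where the rate estimate is well pinned down and where it is not, which answers the "reliable data" question far more directly than a single pooled SD.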