Incorporating physical limits into uncertainty

#1
I am working on a method for incorporating physical "limits" into error analysis, specifically into a 95% confidence interval.

For example, consider a process where a known mass of A enters a system, and after the process occurs, masses B and C exit. We can measure the mass of B but not the mass of C. Since mass is conserved, the mass of C is equal to the mass of A minus the mass of B. After repeating the experiment and measurement n times, we could have an estimate of the mass of C that looks like 0.2 kg (95% CI: -0.2 to 0.7).

Here is where my question comes into play: since we cannot have a negative mass of C, is there a valid method for "truncating" the lower bound of the uncertainty?

I considered just changing the lower bound and writing 0.2 kg (95% CI: 0 to 0.7), but this opens up more questions. First, is this now a 97.5% CI, since the left tail of the uncertainty is cut off? Should I have started with a 90% CI to get to 95% after this method? Moreover, this method produces strange results in more extreme examples: if the estimate of the mass of C started out as -0.2 kg (95% CI: -0.5 to -0.1), do we correct this to 0 kg (95% CI: 0 to 0)?

I am wondering if there is a standard math-based method to perform this adjustment.
 
#2
I am wondering if there is a standard math-based method to perform this adjustment.
Yes: use an error distribution that actually reflects the physical limits, and voilà. In stats-land this is usually done with a 'link' function in a 'generalized linear model'. A prime example is proportions, which are between 0 and 1, so what do you do? You note that log( p / (1 - p) ) lives in (-inf, +inf), compute a CI on log( p / (1 - p) ), and invert it to get an interval inside (0, 1). Another example is taking logs of log-normal data, which gives a CI > 0.

This can be tricky depending on the circumstances.
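To make the logit idea above concrete, here is a minimal sketch in Python. The counts (7 successes out of 50 trials) are invented for illustration, and the standard error on the logit scale comes from the usual delta-method approximation:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 7 successes out of 50 trials.
successes, n = 7, 50
p_hat = successes / n

# Work on the logit scale: eta = log(p / (1 - p)) spans the whole real line.
eta = np.log(p_hat / (1 - p_hat))

# Delta-method standard error of the logit: 1 / sqrt(n * p * (1 - p)).
se_eta = 1.0 / np.sqrt(n * p_hat * (1 - p_hat))

# Symmetric 95% CI on the logit scale...
z = stats.norm.ppf(0.975)
lo_eta, hi_eta = eta - z * se_eta, eta + z * se_eta

# ...then invert back with the logistic function: the resulting interval
# is guaranteed to lie strictly inside (0, 1).
def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

lo, hi = expit(lo_eta), expit(hi_eta)
print(f"p_hat = {p_hat:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The interval is asymmetric around p_hat, which is exactly what respecting the boundary buys you: the bound can approach 0 but never cross it.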
 
#3
Yes: use an error distribution that actually reflects the physical limits, and voilà. In stats-land this is usually done with a 'link' function in a 'generalized linear model'. A prime example is proportions, which are between 0 and 1, so what do you do? You note that log( p / (1 - p) ) lives in (-inf, +inf), compute a CI on log( p / (1 - p) ), and invert it to get an interval inside (0, 1). Another example is taking logs of log-normal data, which gives a CI > 0.

This can be tricky depending on the circumstances.
Thank you for the answer. I'm very much a beginner with all of this, so I'm going to read up on these before going further. I really appreciate it; before your answer I didn't even know if I was asking the correct question.

For now though, I am wondering if changing the error distribution works in every case. What if the mean of my calculation is already negative, but the quantity must be between 0 and 1 (physical limits)? Here it does not seem correct to assume an error distribution that lives between 0 and 1. I was considering something like a truncated normal distribution, though...
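A quick sketch of that truncated-normal idea, with an invented unconstrained estimate of -0.2 and standard error 0.15, constrained to [0, 1]. The only fiddly part is that scipy's `truncnorm` takes its bounds in standardized units:

```python
from scipy import stats

# Hypothetical: an unconstrained estimate of -0.2 with standard error 0.15,
# for a quantity physically constrained to [0, 1].
mu, sigma = -0.2, 0.15
lower, upper = 0.0, 1.0

# scipy's truncnorm expects the bounds expressed in standard deviations
# away from the mean of the untruncated normal.
a, b = (lower - mu) / sigma, (upper - mu) / sigma
trunc = stats.truncnorm(a, b, loc=mu, scale=sigma)

# A 95% interval under the truncated distribution: the central 95% of
# the probability mass that remains inside [0, 1].
lo, hi = trunc.ppf(0.025), trunc.ppf(0.975)
print(f"95% interval: ({lo:.3f}, {hi:.3f})")
```

Note this gives a non-degenerate interval pressed against the boundary, rather than the (0, 0) artifact from naive clipping.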
 
#4
Hmm, if you got a negative value, then I guess it is physically possible to get a negative value. That's what I know about physics!

Probably what is happening here is that you are getting negative values because of the noise of the scale when you are weighing things, or something like that, right? I think it might be tricky and/or non-solvable in some cases. For example, suppose I took a 1 kg standard and weighed it on the world's crappiest scale, which subtracted 2 kg and added some noise to boot. If I did not inform you of the properties of this scale, how would you identify the bias (the subtracted 2 kg)?

But in specific cases it may be possible to construct an experiment or a stats model that can identify the error caused by the bad scale. I'd say that's sort of stats in a nutshell!

Another way to think about it: in non-linear models it is common to place constraints on some parameters, so the estimators that result from such models are constrained to stay in your happy zone. I think this sort of constrained optimization is a little bit of a cheat, though.
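As an illustration of the constrained-estimation idea, here is a sketch using the mass example from the thread. The measurements, noise level, and Gaussian noise model are all made up; the point is just that the bounded fit cannot return a negative mass even when the raw data would:

```python
import numpy as np
from scipy import optimize

# Hypothetical measurements of the mass of B, with mass of A known to be 1.0 kg.
# The invented bias makes the unconstrained estimate of C come out negative.
rng = np.random.default_rng(0)
mass_a = 1.0
b_measurements = 1.1 + 0.05 * rng.standard_normal(20)

# Least-squares objective for the unknown mass of C, using mass
# conservation: B = A - C, with assumed Gaussian measurement noise on B.
def objective(c):
    residuals = b_measurements - (mass_a - c)
    return 0.5 * np.sum(residuals**2)

# Constrained fit: the mass of C is forced to stay in [0, mass_a].
result = optimize.minimize_scalar(objective, bounds=(0.0, mass_a),
                                  method="bounded")
print(f"constrained estimate of mass of C: {result.x:.4f} kg")
```

Here the unconstrained optimum is negative, so the bounded fit lands essentially at the boundary, 0 kg, which illustrates both the appeal and the "cheating" feel of the approach.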
 
#7
Good luck! You can tell me what the last part says; it looks pretty technical. Something that might not be obvious is that there is a one-to-one correspondence between hypothesis tests and confidence intervals. These notes talk about tests only, but the two are one and the same: given a statistical test, the confidence interval is the set of parameter values (like the population mean, or whatever) that would not be rejected by the test. Or the opposite of what I said.
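That test/CI duality can be demonstrated numerically by inverting a test over a grid of candidate parameter values. A sketch with hypothetical binomial counts (7 successes in 50 trials), collecting every null proportion a two-sided binomial test would fail to reject:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 7 successes in 50 trials.
k, n = 7, 50

# Invert the test: the 95% CI is the set of null values p0 that a
# two-sided exact binomial test does NOT reject at alpha = 0.05.
grid = np.linspace(0.001, 0.999, 999)
accepted = [p0 for p0 in grid
            if stats.binomtest(k, n, p0).pvalue >= 0.05]

print(f"inverted-test 95% CI: ({min(accepted):.3f}, {max(accepted):.3f})")
```

Because the binomial test only ever considers p0 in [0, 1], the interval you get by inversion automatically respects the physical limits, which loops back to the original question in this thread.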