Compute score from variance or correlation coefficient?


I’m seeking inspiration in creating a scoring (figure-of-merit) function that can evaluate how close a set of parallel measurements is to a set of truth values. The truth values aren’t a statistical mean per se, but rather target values being sought. A set of measurements together represents a measured state, which is compared to a desired “truth” state. Each new measurement is independent of the previous one.

I’m neither predicting nor estimating states, but rather evaluating the aggregate error of a set of events occurring at the same discrete time instant. Each event is in its own dimension/domain. So it’s a little like seeking the correlation coefficient in a regression analysis, except no data is being fitted because the truth data already exists, the data is at a single time point across a plurality of axes, and we just want to peek at the quality of the measurement.

I thought I could compute a sample variance using each dimension’s truth value as the sample mean, but I don’t know if that’s mathematically sound.
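For what it’s worth, replacing the sample mean with each dimension’s truth value turns the usual variance formula into the mean squared error (MSE), whose square root is the familiar RMSE. A minimal sketch (function names are illustrative):

```python
import math

def rmse(measurements, truths):
    """Root-mean-square error of measurements against known truth values.

    Substituting the truth value for the sample mean in the variance
    formula yields the mean squared error; its square root is the RMSE.
    """
    n = len(measurements)
    mse = sum((m - t) ** 2 for m, t in zip(measurements, truths)) / n
    return math.sqrt(mse)
```

This is mathematically sound as an error measure, but note it is unbounded above, so it doesn’t give the 0.0 to 1.0 range on its own.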

Ideally, I’d like a score with a bounded range, something like the correlation coefficient in the range 0.0 to 1.0.

Is anyone able to help? :)

I found a solution that appears to be working well. For those interested, read on ...

I simply calculate the relative error of each measurement, then use these to compute the score as a weighted root mean square. This represents an "averaging" of the total error, and the goal becomes finding the smallest possible score.