Acceptable average difference value for scientific literature?

I have two instruments that measure the same parameter; I'll call them 1 and 2. Instrument 1 is calibrated and zeroed during the study, while instrument 2 is neither calibrated nor zeroed, so its baseline values are all over the place.

I am trying to superimpose the data on a scatterplot by time-matching 1 and 2. In addition to time matching, I also have to match the Y-axis, and I am trying to match 2 to 1 since 1 is calibrated and zeroed. There are thousands of data points, so sample size should not be an issue.

I figured that the average difference (the difference between 1 and 2 at each matched time point, then averaged) would tell me, in some sense, how close I am to matching the Y-axis between the two data sets.

Statistically speaking, is there an accepted threshold for the average difference? For example, would an average difference < 0.5 be considered optimal? Eventually, I want to plot 1 vs 2 on a Bland-Altman plot.
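For reference, the quantities behind a Bland-Altman plot are the mean difference (bias) and the 95% limits of agreement. A minimal sketch with synthetic data (the instrument readings and the 0.3-unit offset here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(10.0, 2.0, size=5000)       # calibrated instrument 1
x2 = x1 + rng.normal(0.3, 0.5, size=5000)   # instrument 2: small offset plus noise

diff = x1 - x2
mean_diff = diff.mean()                     # bias between the two instruments
sd_diff = diff.std(ddof=1)

# 95% limits of agreement for the Bland-Altman plot
loa_low = mean_diff - 1.96 * sd_diff
loa_high = mean_diff + 1.96 * sd_diff
```

Note that the bias is only meaningful relative to the measurement scale and the noise: a mean difference of 0.5 could be negligible or huge depending on the units, which is why there is no universal cutoff.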


Well-Known Member
As a first try, you could assume that x2 = a + b*x1, which allows for both the zero offset (a) and the calibration scale (b). Then regress x2 on x1 and get estimates for a, b, and the residual error.
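The regression above can be sketched as follows; this is a minimal illustration with synthetic data, where the true offset (1.5) and scale (0.9) are invented so the fit can be checked:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(10.0, 2.0, size=5000)                   # calibrated instrument 1
x2 = 1.5 + 0.9 * x1 + rng.normal(0.0, 0.3, size=5000)   # x2 = a + b*x1 + noise

# Least-squares fit of x2 on x1: returns [slope, intercept]
b, a = np.polyfit(x1, x2, deg=1)

resid = x2 - (a + b * x1)
resid_sd = resid.std(ddof=2)        # residual error estimate

# Invert the fitted model to put instrument 2 on instrument 1's scale
x2_cal = (x2 - a) / b
```

Once `x2_cal` is computed, the two series share the same baseline and scale, and the Bland-Altman comparison can be made on the calibrated values.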