Covariance: the result holds even when one of the means is replaced by an arbitrary constant

One of my students ran into a curious "property" of the covariance. I checked it and it holds, but I really don't know the reason for it.
I don't think this has a real world application, but I would like to know the reason anyway.
When we calculate the covariance of two variables X and Y, we can make a mistake in one of the two means (the means we use to compute the deviation of each value from its variable's center, the deviations that then get multiplied together), and the result is the same as if we had used the correct mean. We tried using an incorrect mean for both variables, but then the result does not hold; it holds only when exactly one of the means is incorrect.
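A quick numerical check of the observation, in plain Python (the data, the helper `cov`, and its override arguments `mx`/`my` are made up purely for illustration):

```python
def cov(xs, ys, mx=None, my=None):
    """Sample covariance (divisor n), optionally with overridden means."""
    n = len(xs)
    if mx is None:
        mx = sum(xs) / n   # correct sample mean of X
    if my is None:
        my = sum(ys) / n   # correct sample mean of Y
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 4.0, 7.0]
ys = [3.0, 1.0, 5.0, 2.0]

correct    = cov(xs, ys)                        # both means correct
wrong_x    = cov(xs, ys, mx=100.0)              # wrong mean for X only
wrong_both = cov(xs, ys, mx=100.0, my=-5.0)     # wrong means for both

print(correct, wrong_x, wrong_both)
```

With a wrong mean for X alone, `wrong_x` matches `correct` exactly; with both means wrong, `wrong_both` is way off, just as described.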
I don't think it is exactly a re-scaling issue, nor the shift property of the covariance, \(\mathrm{Cov}(X+a,Y)=\mathrm{Cov}(X,Y)\).
Thanks in advance!


It doesn't actually exist: the "incorrect mean" cancels out when you expand the definition of the covariance. I'll show you a special case, and you can try more cases if you want.

Say that you have two random variables X and Y but, for whatever reason, the mean of X that you plug in is scaled by some constant a, so you're not working with \(\mu_{X}\) but with \(a\mu_{X}\). Then, writing \(X-a\mu_{X}=(X-\mu_{X})+(1-a)\mu_{X}\), you can do the following:

\[
\mathrm{E}\big[(X-a\mu_{X})(Y-\mu_{Y})\big]
=\mathrm{E}\big[(X-\mu_{X})(Y-\mu_{Y})\big]+(1-a)\,\mu_{X}\,\mathrm{E}[Y-\mu_{Y}]
=\mathrm{Cov}(X,Y)+(1-a)\,\mu_{X}\cdot 0
=\mathrm{Cov}(X,Y).
\]
The same logic works if you instead shift the mean of X by a constant, so that you're working with \(\mu_{X}+a\). In general, if you call your "incorrect mean" \(\mu^{*}_{X}\), so that it is incorrect in whatever way you want, it eventually cancels out, as long as \(\mu^{*}_{X}\) is a constant. If it's "wrong" in some weird way, say, random and correlated with Y, the cancellation fails.
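The general case is one line of algebra: split \(X-\mu^{*}_{X}\) into \((X-\mu_{X})+(\mu_{X}-\mu^{*}_{X})\) and take expectations:

\[
\mathrm{E}\big[(X-\mu^{*}_{X})(Y-\mu_{Y})\big]
=\mathrm{Cov}(X,Y)+(\mu_{X}-\mu^{*}_{X})\,\mathrm{E}[Y-\mu_{Y}]
=\mathrm{Cov}(X,Y),
\]

since \(\mathrm{E}[Y-\mu_{Y}]=0\). This is also why using a wrong mean for both variables breaks it: expanding both deviations leaves a cross term \((\mu_{X}-\mu^{*}_{X})(\mu_{Y}-\mu^{*}_{Y})\) that has nothing to cancel against.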