I have read many examples on the internet of differences between the two scales, but I do not understand why statistical tools such as the relative standard deviation are used for ratio scales and not for interval scales. I found everything and its opposite online, so I have formed a hypothesis and would like to know whether it is valid.

Suppose we are working on a ratio scale with a variable X, and define a variable Y = aX (with a > 0). My hypothesis is that the statistical tools that can be used on a ratio scale are those for which the result is the same for X and Y. Now suppose we are working on an interval scale and define Y = aX + b. Then the statistical tools that can be used are those for which the result is the same for X and Y.

Under this hypothesis, we cannot use the relative standard deviation for interval scales, because RSD(X) is not equal to RSD(Y). On the other hand, we can normalize the data of the two variables whatever the scale. (By normalization I mean computing the scores z_i = (x_i - mean(x)) / sd(x).)

That is my hypothesis. Can you confirm or refute it, and explain why?
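To make the hypothesis concrete, here is a small numerical check I put together (the data and the constants a and b are made up for illustration): the RSD is unchanged by a rescaling Y = aX but changes under an affine transformation Y = aX + b, while z-scores are unchanged by the affine transformation.

```python
import math
import statistics as st

def rsd(xs):
    # relative standard deviation: standard deviation / mean
    return st.stdev(xs) / st.fmean(xs)

def zscores(xs):
    # z-score: (value - mean) / standard deviation
    m, s = st.fmean(xs), st.stdev(xs)
    return [(v - m) / s for v in xs]

x = [2.0, 4.0, 6.0, 8.0]                 # arbitrary example data
a, b = 3.0, 10.0                         # arbitrary constants
y_ratio = [a * v for v in x]             # rescaling: admissible on a ratio scale
y_interval = [a * v + b for v in x]      # affine change: admissible on an interval scale

print(math.isclose(rsd(x), rsd(y_ratio)))      # True: RSD survives Y = aX
print(math.isclose(rsd(x), rsd(y_interval)))   # False: RSD changes under Y = aX + b
print(all(math.isclose(u, v)
          for u, v in zip(zscores(x), zscores(y_interval))))  # True: z-scores survive Y = aX + b
```

(I use `math.isclose` rather than `==` only to avoid floating-point rounding issues.)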

Thank you