Simple regression

#1
Hi,

I am having a problem and I know a way to solve it, but I don't know how to implement it statistically. I'll try to summarize the problem simply here:

Say I am regressing a dataset, X, against another dataset, Y.

Now, I wish to capture how much this regression line disagrees with a hypothetical "perfect regression line" that passes exactly through all observed data points (maybe keeping the intercept the same as before).

Then, I wish to ADD that disagreement component (of X and Y) to another dataset, say M.

Now, how do I capture the disagreement, and then add it to another dataset?

Any help would be very much appreciated.

kind regards,
Chintanu
 
#2
It seems like your "perfect regression line" would be a non-linear function, but your regression line would be a straight line. I don't know how you would compare them. Or are you talking about SSE, SSR, and SST stuff?
 
#3
Many thanks for your reply, Beemzet.

Ok, let me put it this way:

If I call my regression line 'm' and the hypothetical regression line 'n' ('n' has slope = 1 and the same intercept as 'm'), then both m and n are linear.

Now, I wish to capture how much 'm' has to shift to reach 'n', and then ADD that "shift" component to another dataset, say M.

I'm wondering how to do it!
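
To make it concrete, here is a minimal sketch in Python of what I mean (the arrays x, y, and M are just placeholder names, and I'm assuming M is paired point-by-point with x):

```python
import numpy as np

# Placeholder example data; x, y, M stand in for the real datasets.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 0.7 * x + 2 + rng.normal(scale=0.5, size=x.size)
M = rng.normal(size=x.size)

# Fitted regression line m: y = slope * x + intercept (OLS).
slope, intercept = np.polyfit(x, y, 1)
m_line = slope * x + intercept

# Hypothetical line n: slope forced to 1, same intercept as m.
n_line = 1.0 * x + intercept

# Vertical "shift" from m to n at each observed x.
shift = n_line - m_line          # equals (1 - slope) * x

# ADD that shift component to the other dataset M.
M_adjusted = M + shift
```

Note that the shift at each point is just (1 - slope) * x, so adding it to M only makes sense if each element of M corresponds to an element of x.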
 
#4
The regression line found by Excel, SPSS, or any other package will be the best-fitting line, i.e. the one that minimises the total squared vertical deviation. That slope is m. Why do you want n (some other slope)?
 
#5
Yeah, indeed, the best line you can get is the OLS line, since it is unbiased (in most cases) and minimizes the sum of squares... If you then calculate the difference between the observations and the line, you get the errors, and if you square them and take the sum, you get the sum of squared errors... which is what you are looking for, I guess...

But my question is: why? OLS already minimizes these errors (the alpha and beta estimators come from the objective function that minimizes these errors).
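
In code, the errors and the sum of squared errors would look roughly like this (a minimal sketch with placeholder data, assuming x and y are NumPy arrays):

```python
import numpy as np

# Placeholder data; in practice x and y are the observed datasets.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 0.7 * x + 2 + rng.normal(scale=0.5, size=x.size)

# OLS fit: beta (slope) and alpha (intercept) are the values that
# minimize the sum of squared vertical deviations.
beta, alpha = np.polyfit(x, y, 1)
fitted = alpha + beta * x

errors = y - fitted              # residuals: observation minus fitted line
sse = np.sum(errors ** 2)        # sum of squared errors (SSE)
```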