Linear regression and vectors: Help

#1
Hi everyone, in Statistics I'm studying linear regression and the least squares method, but I don't have a great knowledge of linear algebra and vectors. If I understood correctly, my notes say that, to find the approximating regression line, I need to find the value of y(hat) that minimizes $\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$, and that this value is the orthogonal projection of y (the vector of observed data y1...yn) onto the subspace E, whose basis is e and x (the fixed observed values of x: x1...xn).
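Here is a small numerical sketch of how I currently picture this, using NumPy and made-up data (the variable names are my own):

```python
import numpy as np

# Made-up data, just for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
e = np.ones(len(y))                      # e = (1, ..., 1)

# Columns e and x span the subspace E
E = np.column_stack([e, x])

# Least squares: the (a, b) that minimize sum((y_i - y_hat_i)^2)
(a, b), *_ = np.linalg.lstsq(E, y, rcond=None)
y_hat = a * e + b * x

# y_hat is the orthogonal projection of y onto E, so the residual
# y - y_hat should be orthogonal to both basis vectors:
print(np.dot(y - y_hat, e))   # ~ 0
print(np.dot(y - y_hat, x))   # ~ 0
```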
If I understood correctly, e is a column vector of ones that gets multiplied by a scalar equal to the intercept, so that each yi is defined taking the intercept into account. But what is the usefulness of multiplying the scalar a by the vector e=(1...1) instead of directly using a column vector a=(a...a)?
The formula is: y(hat)=ae+bx
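For what it's worth, this is how I've tried to picture it numerically (again NumPy and made-up numbers of my own):

```python
import numpy as np

n = 5
a = 1.5                      # made-up intercept
e = np.ones(n)               # e = (1, ..., 1)

# a*e gives exactly the constant column vector (a, ..., a):
print(a * e)                 # [1.5 1.5 1.5 1.5 1.5]

# Keeping e fixed and a as a separate scalar means y_hat = a*e + b*x is a
# linear combination of two known vectors (e and x), where the unknowns are
# only the scalars a and b.
```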
Returning to the topic of the orthogonal projection: my notes then say that finding y(hat)=ae+bx such that y−y(hat) is orthogonal to all the elements of E gives the desired value of y(hat), and that to do this it is enough to check that y−y(hat) is orthogonal to the elements of the basis, e and x, i.e. that its inner product with e and with x is 0.
It can be expressed in the system:

$$\begin{cases} e^t(y - ae - bx) = 0 \\ x^t(y - ae - bx) = 0 \end{cases}
\qquad\text{i.e.}\qquad
\begin{pmatrix} e^te & e^tx \\ x^te & x^tx \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
=
\begin{pmatrix} e^ty \\ x^ty \end{pmatrix},
\qquad M\begin{pmatrix} a \\ b \end{pmatrix} = z$$
My question is: what is the meaning of M(a b) = z and of (a b) = M^(-1) z? How were they obtained? What is the meaning of the contents of M and z? I can guess that M^(-1) is the inverse of M and e^t is the transpose of e, but I still don't understand the matrices written there and their role.
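In case it helps to check my understanding, this is how I read M and z in NumPy (same kind of made-up data as above; this is only my own sketch of what I think the notes mean):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
e = np.ones(len(y))

# The two orthogonality conditions e^t(y - y_hat) = 0 and x^t(y - y_hat) = 0,
# with y_hat = a*e + b*x, become M @ (a, b) = z where:
M = np.array([[e @ e, e @ x],
              [x @ e, x @ x]])
z = np.array([e @ y, x @ y])

a, b = np.linalg.solve(M, z)   # equivalent to (a, b) = M^(-1) z
print(a, b)

# Cross-check with a standard straight-line fit
slope, intercept = np.polyfit(x, y, 1)
print(intercept, slope)
```

Is that the right way to read it?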