Distribution assumptions

noetsi

Fortran must die
#1
This contradicts a lot of what I have read over the last decade plus:

"A 1-unit increase in log(x)log⁡(x) is actually multiplying xx by ee (almost 3), which is a very different thing from adding 1 unit to xx. So the meaning of the estimated coefficient is totally different. You never need to transform your predictors to meet assumptions, as there are no assumptions on their distribution. Only outcomes have distributional assumptions in OLS. "

They argue that transforming a variable creates a totally new model, much as adding or dropping a variable would.

Also, it is commonly suggested to transform predictors specifically to deal with the normality assumption. I know it is the residuals that matter here, which they don't really make clear, but the point is that you are changing a predictor to deal with it. I have never heard it argued that you should not do so because it creates a new model, nor that you could not interpret the result, via back-transformation, in terms of the original model. A sketch of the interpretation point follows below.
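
For concreteness, here is a minimal sketch (my own, not from the thread, assuming numpy and statsmodels) of the interpretation point in the quote: the slope on log(x) answers a different question than the slope on x, which is the sense in which the transformed fit is a different model rather than the same model in new units.

```python
# Minimal sketch (not from the thread): how logging a predictor changes
# what a 1-unit coefficient step means. Assumes numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 100, size=500)                 # strictly positive predictor
y = 2.0 + 3.0 * np.log(x) + rng.normal(0, 1, size=500)

# Model A: y on x -- the slope is the expected change in y when you ADD 1 to x
fit_x = sm.OLS(y, sm.add_constant(x)).fit()

# Model B: y on log(x) -- the slope is the expected change in y when you
# MULTIPLY x by e (about 2.72), i.e. a 1-unit step on the log scale
fit_logx = sm.OLS(y, sm.add_constant(np.log(x))).fit()

print(fit_x.params)     # answers a different question than the line below
print(fit_logx.params)  # slope near 3: the change in y per e-fold increase in x
```

The two slopes are not rescaled versions of each other; they answer different questions about the same data, which is why the quoted author treats the transformed fit as a new model.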

This one blew me away.
 

hlsmith

Less is more. Stay pure. Stay poor.
#2
Well, get your head out of that Florida sand :)

Also, who is conducting a log(x)*log(x) transformation in OLS?

Whenever you back-transform you are losing some info, since it usually isn't like going from centimeters to meters; you are messing around with linearity/growth transformations. They make a good point about what you are actually trying to do. See the sketch below for the contrast.
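
To illustrate the contrast being drawn here, a minimal sketch of my own (assuming numpy and statsmodels): rescaling a predictor from centimeters to meters leaves the fit identical, with the slope simply multiplied by 100, while logging the predictor changes the fitted model itself.

```python
# Minimal sketch (my own, not hlsmith's): a linear rescale of a predictor
# (cm -> m) is the same model in new units, while a log transform is a
# genuinely different model. Assumes numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
height_cm = rng.uniform(150, 200, size=300)
y = 10 + 0.05 * height_cm + rng.normal(0, 1, size=300)

fit_cm = sm.OLS(y, sm.add_constant(height_cm)).fit()
fit_m = sm.OLS(y, sm.add_constant(height_cm / 100)).fit()       # same model, rescaled units
fit_log = sm.OLS(y, sm.add_constant(np.log(height_cm))).fit()   # a different model

print(np.allclose(fit_cm.fittedvalues, fit_m.fittedvalues))     # True: identical fitted values
print(np.allclose(fit_cm.fittedvalues, fit_log.fittedvalues))   # False: the fit itself changed
print(fit_cm.params[1] * 100, fit_m.params[1])                  # slopes differ only by the unit factor
```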
 

noetsi

Fortran must die
#3
I have read so many articles asserting you should transform to deal with normality (in the residuals) that this one sent me into...sand. Why would you ever transform to deal with normality with any reasonably sized dataset, given what is being said here?
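
One way to see the point, as a minimal sketch of my own (assuming numpy and statsmodels): simulate OLS with deliberately skewed errors and check how often the usual 95% confidence interval for the slope covers the truth. With a reasonably large n the coverage is close to nominal, so transforming purely to chase residual normality buys little.

```python
# Minimal sketch (my own illustration): with a reasonably large sample,
# OLS inference on the slope holds up even when the errors are badly
# skewed. Assumes numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, reps, true_slope = 1000, 2000, 2.0
covered = 0
for _ in range(reps):
    x = rng.uniform(0, 10, size=n)
    err = rng.exponential(scale=2.0, size=n) - 2.0           # skewed, mean-zero errors
    y = 1.0 + true_slope * x + err
    ci = sm.OLS(y, sm.add_constant(x)).fit().conf_int()[1]   # 95% CI for the slope
    covered += (ci[0] <= true_slope <= ci[1])

print(covered / reps)  # close to 0.95 despite clearly non-normal residuals
```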