Smearing Transformation

#1
I've found several papers on the smearing transformation for \(\ln(x)\). However, I'm curious if there is one for \(\ln\frac{x}{1-x}\). I can't find anything, but thought I would ask here before giving up.
 

Englund

TS Contributor
#2
I can't answer your question, but I can give some insight into smearing estimators and why they are desirable, for those who aren't familiar with them. Suppose we have a model of the following form:

\(\ln(y_i)=\alpha+\beta x_i+\varepsilon_i\).

If this is the true model, we can obtain unbiased estimates \(a=\hat{\alpha}\) and \(b=\hat{\beta}\) of \(\alpha\) and \(\beta\). Then we can get unbiased predictions of \(\ln(y_j)\) as follows:

\(\widehat{\ln(y_j)}=a+bx_j\). It is easy to show that \(\widehat{\ln(y_j)}\) is unbiased:

\(E[\widehat{\ln(y_j)}]=E[a+bx_j]=E[a]+E[b]x_j=\alpha+\beta x_j=E[\ln(y_j)]-E[\varepsilon_j]\)

Since we assume that \(E[\varepsilon_j]=0\ \forall j\), this last term vanishes and the prediction is unbiased. But what happens if we want an unbiased prediction of \(y_j\) instead of its logarithm? By Jensen's inequality, \(g(E[X]) \leq E[g(X)]\) for any convex \(g\), so we do not get an unbiased estimate of \(y_j\) by taking \(e^{\widehat{\ln(y_j)}}=e^{a+bx_j}\). Concretely, for a new observation, whose error is independent of the estimates,

\(E[e^{a+bx_j+\varepsilon_j}]=E[e^{a+bx_j}]E[e^{\varepsilon_j}] \neq E[e^{a+bx_j}]\)

since \(E[e^{\varepsilon_j}] \geq e^{E[\varepsilon_j]}=e^0=1\), with strict inequality whenever \(\varepsilon_j\) is non-degenerate. For example, if \(\varepsilon_j \sim N(0,\sigma^2)\), then \(E[e^{\varepsilon_j}]=e^{\sigma^2/2}>1\), so the naive back-transform is biased downward. This is the reason to use smearing estimators, such as Duan's smearing estimator and similar techniques.
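To make this concrete, here is a minimal sketch of Duan's estimator for the log-linear model above: fit OLS on the log scale, then rescale the naive back-transform by the sample mean of the exponentiated residuals, \(\frac{1}{n}\sum_{i=1}^{n} e^{\hat{\varepsilon}_i}\). The code uses only numpy; all parameter values and names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from the log-linear model ln(y) = alpha + beta*x + eps
alpha, beta, sigma = 1.0, 0.5, 0.8
n = 5000
x = rng.uniform(0, 2, n)
log_y = alpha + beta * x + rng.normal(0, sigma, n)
y = np.exp(log_y)

# OLS on the log scale
X = np.column_stack([np.ones(n), x])
(a, b), *_ = np.linalg.lstsq(X, log_y, rcond=None)
resid = log_y - (a + b * x)

# Naive back-transform: biased low by the factor E[exp(eps)]
pred_naive = np.exp(a + b * x)

# Duan's smearing estimator: rescale by the mean exponentiated residual
smear = np.exp(resid).mean()
pred_smear = pred_naive * smear

print(f"true E[exp(eps)]   = {np.exp(sigma**2 / 2):.3f}")  # lognormal mean
print(f"smearing factor    = {smear:.3f}")
print(f"mean of y          = {y.mean():.3f}")
print(f"mean naive pred.   = {pred_naive.mean():.3f}")
print(f"mean smeared pred. = {pred_smear.mean():.3f}")
```

Note that the smearing factor requires no normality assumption; it estimates \(E[e^{\varepsilon}]\) nonparametrically from the residuals, which is what makes Duan's estimator attractive.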
 

Englund

TS Contributor
#4
You could run a simple simulation to get a better grip on the problem: simulate data on the \(\ln\frac{x}{1-x}\) scale, estimate the model, solve for \(x\) by inverting the transform, and repeat the simulation \(M\) times to see how the back-transformed predictions behave. A sketch is given below.
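Along those lines, here is a minimal simulation sketch (numpy only; all parameter values are illustrative). Duan's smearing idea generalizes beyond the log case: instead of a multiplicative factor, you average the inverse transform over the estimated residuals, \(\frac{1}{n}\sum_{i=1}^{n} g^{-1}(\hat{\eta}+\hat{\varepsilon}_i)\), where \(g^{-1}\) is here the inverse logit. Whether this is exactly the estimator developed in the papers the OP has in mind is an assumption on my part, but it is the natural extension:

```python
import numpy as np

def expit(z):
    # Inverse of the logit transform ln(x / (1 - x))
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
alpha, beta, sigma = -0.5, 1.0, 0.7    # illustrative parameter values
n, M = 200, 2000                       # sample size, number of replications
x_new = 1.0                            # point at which we predict

# Monte Carlo value of E[y | x_new] under the assumed normal errors
true_mean = expit(alpha + beta * x_new + rng.normal(0, sigma, 10**6)).mean()

naive, smeared = [], []
for _ in range(M):
    x = rng.uniform(0, 2, n)
    logit_y = alpha + beta * x + rng.normal(0, sigma, n)
    X = np.column_stack([np.ones(n), x])
    (a, b), *_ = np.linalg.lstsq(X, logit_y, rcond=None)
    resid = logit_y - (a + b * x)
    eta_hat = a + b * x_new
    naive.append(expit(eta_hat))                   # plain back-transform
    smeared.append(expit(eta_hat + resid).mean())  # Duan-style smearing

print(f"true E[y | x_new] = {true_mean:.4f}")
print(f"naive mean        = {np.mean(naive):.4f}")
print(f"smeared mean      = {np.mean(smeared):.4f}")
```

Comparing the two averages against the Monte Carlo truth shows how large the retransformation bias is in the logit case and how much of it the smearing correction removes.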