- Thread starter fed2

E(betahat) = slope of g at mean X + bias term

where the bias term is the slope of the linear regression relating the remainder term of the Taylor approximation to X.
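To spell out why that decomposition holds (my notation, not from the original post): write each response as the tangent line at a = mean(X) plus the Taylor remainder R and noise. Because the OLS slope is a linear functional of the responses, it splits term by term:

```
y_i = g(a) + g'(a)(x_i - \bar{x}) + R(x_i) + \varepsilon_i

\hat\beta
  = \frac{\sum_i (x_i - \bar{x})\, y_i}{\sum_i (x_i - \bar{x})^2}
  = g'(a) + \frac{\sum_i (x_i - \bar{x})\,[R(x_i) + \varepsilon_i]}{\sum_i (x_i - \bar{x})^2}
```

The slope of the affine tangent term is exactly g'(a), and the second term is the slope of the regression of the remainder (plus noise) on X; taking expectations gives E(betahat) = g'(a) + E(slope of R on X), i.e. the bias term above.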

I swear I've seen this sort of theory somewhere before, but it's hard to Google.

Maybe some R code explains it better:

```
# logistic function
g <- function(x) {
  1 / (1 + exp(-x))
}
# first derivative
g_dot <- function(x) {
  g(x) * (1 - g(x))
}
# first-order Taylor approximation at a
tangent <- function(x, a) {
  g(a) + g_dot(a) * (x - a)
}
# fit a straight line to a noisy logistic experiment
runSim <- function(j) {
  X <- runif(10, -1, 1)
  gX <- g(X) + rnorm(10, 0, 0.1)
  # fit linear regression to the logistic data
  betaHat <- coef(lm(gX ~ X))[[2]]
  # the bias is the slope of the SLR relating the Taylor remainder
  # (plus noise) to X; note the sign: remainder = data - tangent
  errs <- gX - tangent(X, 0)
  bias <- coef(lm(errs ~ X))[[2]]
  data.frame(betaHat = betaHat, bias = bias)
}
mySims <- do.call('rbind', lapply(1:1000, runSim))
print(sprintf('expected slope at mean X = %f', g_dot(0)))
print(sprintf('mean slope of SLR        = %f', mean(mySims$betaHat)))
print(sprintf('mean predicted bias      = %f', mean(mySims$bias)))
# so fitting a linear regression gives g_dot(0) + a bias related
# to the error of the first-order Taylor approximation?
```
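For what it's worth, the decomposition is exact for each individual fit, not just in expectation, since the OLS slope is linear in the responses. A minimal Python sketch of the same check (NumPy only; the function names and `ols_slope` helper are mine, not from the post):

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))   # logistic function

def g_dot(x):
    return g(x) * (1.0 - g(x))        # first derivative of the logistic

def tangent(x, a):
    return g(a) + g_dot(a) * (x - a)  # first-order Taylor approximation at a

def ols_slope(x, y):
    # simple linear regression slope: cov(x, y) / var(x)
    x_c = x - x.mean()
    return np.dot(x_c, y - y.mean()) / np.dot(x_c, x_c)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10)
y = g(x) + rng.normal(0, 0.1, 10)

beta_hat = ols_slope(x, y)
# slope of the regression of the Taylor remainder (plus noise) on x
bias = ols_slope(x, y - tangent(x, 0.0))

# the OLS slope is linear in y, and the slope of the affine tangent
# term is exactly g'(0), so the decomposition holds to machine precision
assert np.isclose(beta_hat, g_dot(0.0) + bias)
```

The assertion passes for any draw of x and y, which is the "exact in-sample" version of the E(betahat) = g'(mean X) + bias claim above.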