If so, does anyone know how?

I don't know! But here are my thoughts.

So there has been a movement back toward linear probability models in cases where multi-group comparisons are of focal interest. The rationale is that cross-group comparisons of parameter estimates get shaky in probit and logit models, requiring a fair amount of finessing to get right.

The usual logit model with link function g( ) is often written as:

g(p) = log(p/(1-p)) = beta'x

As I understand it, the linear probability model is just a model with identity link:

p = beta'x
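As a quick illustration (hypothetical data, made-up coefficients), the identity link means the LPM can be fit by plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one covariate, true probabilities linear in x.
n = 5000
x = rng.uniform(0.0, 1.0, n)
p_true = 0.2 + 0.5 * x            # stays inside (0, 1) on this range
y = rng.binomial(1, p_true)

# Linear probability model p = beta'x, fit by ordinary least squares.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_hat)  # roughly [0.2, 0.5]
```

One well-known caveat of the identity link is that the fitted probabilities are not constrained to lie in [0, 1].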

Maximum likelihood estimation in a generalized linear model is carried out by iteratively reweighted least squares:

(X'W(t)X) beta(t+1) = X'W(t) z(t), with conventional nomenclature (z is the working response; for the identity link z is just y),

where the weights are updated in each round. But this only takes care of the heteroskedasticity (the variance p(1-p) increases as p gets closer to 0.5), not the sampling design.
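To make the update concrete, here is a minimal numpy sketch of IRLS for the logit link, on simulated data with made-up coefficients:

```python
import numpy as np

def irls_logit(X, y, n_iter=25):
    """Fit a logit model by iteratively reweighted least squares.

    Each round solves (X'WX) beta = X'Wz, where W holds the weights
    p(1-p) and z is the working response, both recomputed from the
    current beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = p * (1.0 - p)                 # weights, largest near p = 0.5
        z = eta + (y - p) / w             # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Hypothetical example: recover known coefficients.
rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))
y = rng.binomial(1, p)
beta_hat = irls_logit(X, y)
print(beta_hat)  # close to [-0.5, 1.0]
```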

Let's write the model as:

Y = beta'x + eps

Where Y is still Bernoulli distributed (binomial with one trial) with mean beta'x, which makes eps an awkward, heteroskedastic disturbance term.
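Since Var(eps) = p(1-p) varies with x, a common fix when the LPM is fit by OLS is a heteroskedasticity-robust (sandwich) covariance estimator. A minimal sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.uniform(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 0.2 + 0.5 * x)

# OLS fit of the linear probability model.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)
resid = y - X @ beta_hat

# Because Var(eps) = p(1-p) varies with x, the usual OLS variance
# formula is off; the robust (sandwich) estimator
# (X'X)^{-1} X' diag(e^2) X (X'X)^{-1} accounts for it.
meat = (X * resid[:, None] ** 2).T @ X
cov_robust = XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(cov_robust))
print(se_robust)
```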

Now suppose that there is a complex sampling design, so that there is a random selection variable S (selected or not selected) and the selection probability is not constant, as it would be under simple random sampling.

Now my point is that if the disturbance term eps in the regression model and S are statistically independent, then, my guess is, you can ignore the complex sampling design and estimate the model as if it came from a simple random sample.

If they are independent, the likelihood just multiplies the densities:

L = f(s) * f(y; beta)

and in the log-likelihood the sampling contributes only an additive constant:

log L = log f(s) + log f(y; beta)
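A small simulation illustrates the point: when the selection probability depends only on the covariates (so S is independent of eps), fitting the LPM on the selected subsample alone still recovers beta. All numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.uniform(size=n)
p = 0.2 + 0.5 * x
y = rng.binomial(1, p)

# Selection probability depends on x only, not on the disturbance eps,
# so S and eps are independent given the covariates.
sel_prob = 0.1 + 0.6 * x
s = rng.binomial(1, sel_prob).astype(bool)

# Fit the LPM on the selected subsample, ignoring the design.
Xs = np.column_stack([np.ones(s.sum()), x[s]])
beta_hat, *_ = np.linalg.lstsq(Xs, y[s], rcond=None)
print(beta_hat)  # still roughly [0.2, 0.5]
```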

But if the sampling design is not independent, so that the selection probability is a function of beta, then maybe that can be modelled in the likelihood:

L = f(s, y; beta)

If it is estimated with ML (and for me ML is maximum likelihood!), the variances and covariances can be found from the inverse of the information matrix.
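For the logit fit, for instance, the expected information at the MLE is X'WX with W = diag(p(1-p)), and its inverse estimates the full variance-covariance matrix of the estimates. A minimal sketch on simulated data with made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))
y = rng.binomial(1, p_true)

# Newton iterations to the logit MLE: score is X'(y - p),
# expected information is X'WX with W = diag(p(1-p)).
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))

# Information evaluated at the MLE; its inverse is the estimated
# variance-covariance matrix of beta_hat.
p = 1.0 / (1.0 + np.exp(-(X @ beta)))
info = (X * (p * (1.0 - p))[:, None]).T @ X
cov_beta = np.linalg.inv(info)
se = np.sqrt(np.diag(cov_beta))
print(beta, se)
```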

But I believe that this was solved long ago. I just guess that the results have not been used very much.

Lazar wants cross-group comparisons of parameter estimates. Then I guess that he needs not only the standard errors but also the variances and covariances of the parameter estimates.
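One way to see why the covariances matter: fit a single LPM with a group dummy and an interaction term, and test the slope difference with a Wald statistic. The slope within a group is a sum of two coefficients, so its variance needs the off-diagonal covariance, not just the two standard errors. A sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000
g = rng.binomial(1, 0.5, n)              # group indicator
x = rng.uniform(size=n)
p = 0.2 + 0.3 * x + 0.2 * g * x          # true slope differs by 0.2 across groups
y = rng.binomial(1, p)

# LPM with group main effect and interaction; the interaction
# coefficient is exactly the cross-group slope difference.
X = np.column_stack([np.ones(n), x, g, g * x])
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)
resid = y - X @ beta_hat
cov = XtX_inv @ ((X * resid[:, None] ** 2).T @ X) @ XtX_inv  # robust covariance

diff = beta_hat[3]                        # estimated slope difference
se_diff = np.sqrt(cov[3, 3])
z_stat = diff / se_diff                   # Wald statistic for the comparison

# The slope in group g=1 is beta_1 + beta_3, so its variance needs
# the covariance term, not just the two standard errors:
var_slope_g1 = cov[1, 1] + cov[3, 3] + 2 * cov[1, 3]
print(diff, se_diff, z_stat, np.sqrt(var_slope_g1))
```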

I don't know much about bootstrapping in this case, but I would guess that the maximum likelihood estimates would be more precise.