Checking significance of bias.

In a new design, the estimator appears to be biased but more efficient than the one based on the existing design. I now want to check whether the bias of the estimator based on the new design is significant.

The formula for the variance of the estimator based on the new design is intractable.

How can I check whether the bias of the estimator based on the new design is significant?


Less is more. Stay pure. Stay poor.
Use simulated data, so the true effects are known, and then compare error estimates.

P.S. The bootstrap may be your friend here.
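To make the "simulate with known truth" idea concrete, here is a minimal sketch. Everything specific in it is an assumption for illustration: a toy normal data-generating process with known mean, and a deliberately shrunken-mean `estimator` standing in for the new-design estimator. The Monte Carlo bias is the mean estimate minus the known truth, and dividing by its Monte Carlo standard error gives an approximate z-statistic for H0: bias = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dataset(n, true_mean=2.0):
    # toy DGP with a known true value (assumed for illustration)
    return rng.normal(loc=true_mean, scale=1.0, size=n)

def estimator(x):
    # placeholder for the new-design estimator; a shrunken mean is
    # used here so the bias check has something to detect
    return 0.9 * x.mean()

true_value = 2.0
reps = 2000
estimates = np.array([estimator(simulate_dataset(100)) for _ in range(reps)])

# Monte Carlo estimate of the bias and its standard error
bias = estimates.mean() - true_value
se_bias = estimates.std(ddof=1) / np.sqrt(reps)
z = bias / se_bias  # approximately standard normal under H0: bias = 0
print(f"bias: {bias:.4f}, z: {z:.1f}")
```

With your own estimator and design substituted in, a large |z| indicates the bias is significant relative to Monte Carlo noise; increasing `reps` tightens the standard error.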


I'm not a wizard in R or in simulation, but you could create the true data-generating process:

P(X1=1) = 0.6
P(X2=1) = 0.4
P(Y = 1 | X1, X2)
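The data-generating process above can be sketched as follows. Note the post leaves P(Y=1|X1,X2) unspecified, so a logistic form with made-up coefficients is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def generate(n):
    # DGP from the post: P(X1=1) = 0.6, P(X2=1) = 0.4
    x1 = rng.binomial(1, 0.6, n)
    x2 = rng.binomial(1, 0.4, n)
    # P(Y=1|X1,X2) is unspecified in the post; a logistic form with
    # assumed coefficients is used here as a stand-in
    logit = -0.5 + 1.0 * x1 + 0.8 * x2
    p = 1.0 / (1.0 + np.exp(-logit))
    y = rng.binomial(1, p)
    return x1, x2, y

x1, x2, y = generate(10_000)
print(x1.mean(), x2.mean(), y.mean())
```

Because the coefficients in `logit` are the known truth, any fitted model's estimates can be compared against them directly.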

Then fit the traditional approach and your novel approach and obtain estimates. Depending on your model (e.g., linear, logistic), you also get an error measure (e.g., RMSE, accuracy). To put the results into perspective, you can get 95% percentile confidence intervals via bootstrapping. Next, overlay the two histograms of the estimates, and mark the true effect on the figure with a reference line. This contrasts the results from the two model specifications.
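The percentile-bootstrap step can be sketched as below. The dataset and the shrunken-mean `estimator` are assumptions standing in for your design; the resampling and percentile interval are the generic technique. Since the truth is known in simulation, an interval that excludes the true value flags the bias as significant at roughly the 5% level.

```python
import numpy as np

rng = np.random.default_rng(2)

# one simulated dataset from a known truth (toy normal DGP, assumed
# for illustration; substitute your own design)
true_value = 2.0
x = rng.normal(true_value, 1.0, size=1000)

def estimator(data):
    # stand-in for the new-design estimator; deliberately biased
    # downward so the interval check has something to find
    return 0.9 * data.mean()

# 95% percentile bootstrap: resample with replacement, re-estimate
boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                 for _ in range(4000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"estimate: {estimator(x):.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

The `boot` array is also what you would plot as a histogram for each approach, with a vertical reference line at `true_value`.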

One thing to note is that your results would hold only for the one data-generating process you simulated, meaning the bias may differ under other data-generating processes, say with other covariate distributions, interaction terms, etc. I believe this is a general heuristic approach for intractable problems. It can help discern the direction and magnitude of the bias.