Validate Model with Bayesian Statistics and GMM


We are currently running tests to check the robustness of a model.
The dependent variable in our model is a continuous variable measuring the time users spend visiting a website.
It has been log-transformed because it is positively skewed.
The model was previously estimated by the standard procedure of ordinary linear regression (OLS).
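To make the setup concrete, here is a minimal sketch of that baseline: simulated, right-skewed dwell times driven by one predictor, log-transformed and fit by OLS. All numbers (sample size, coefficients, noise scale) are illustrative assumptions, not your actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: dwell times generated on the log scale, so the raw
# outcome is positively skewed, as in the question.
n = 500
x = rng.normal(size=n)
log_time = 4.0 + 0.5 * x + rng.normal(scale=0.3, size=n)  # true model on log scale
time_on_site = np.exp(log_time)                           # raw, right-skewed outcome

y = np.log(time_on_site)                  # the log-transformed dependent variable
X = np.column_stack([np.ones(n), x])      # design matrix with intercept
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_ols)                           # close to the true [4.0, 0.5]
```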

To demonstrate the robustness of the model, we would like to re-estimate the regression with a different method. We came across the following two approaches:
1. Bayesian statistics / Bayesian inference
2. the generalized method of moments (GMM)
Unfortunately, we don't know whether these methods are suitable for our project; it may even be that neither makes sense for our use case. If anyone has used them to check the robustness of a model, or can assess whether either approach makes sense here, that would help us tremendously.

Thanks in advance :)


Are you familiar with Bayesian approaches? Say I fit a linear model and want to show that my estimates are not model-dependent, so I do a sensitivity analysis. I typically fit a completely different type of model, something like a random forest or quantile regression; in other words, something non-parametric or at least free of the linear model's distributional assumptions.
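As one concrete version of that idea, here is a numpy-only sketch of median (L1) regression via iteratively reweighted least squares, a rough stand-in for quantile regression (in practice you would likely reach for a dedicated quantile-regression routine instead). The data, the IRLS helper `lad_irls`, and all numbers are illustrative assumptions.

```python
import numpy as np

def lad_irls(X, y, n_iter=50, eps=1e-6):
    """Median (L1) regression via iteratively reweighted least squares.
    A sketch of the quantile-regression idea, not a production solver."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # down-weight large residuals
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least squares
    return beta

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = 4.0 + 0.5 * x + rng.standard_t(df=3, size=n) * 0.3  # heavy-tailed noise
X = np.column_stack([np.ones(n), x])
print(lad_irls(X, y))  # roughly [4.0, 0.5]
```

If the median-regression slope agrees with the OLS slope, that is the kind of model-independence evidence the sensitivity analysis is after.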

Bayesian models with flat (non-informative) priors will give estimates comparable to your linear regression, since in principle it is the same thing. Bayesian modelling comes into its own when you have prior information you want to incorporate into the modelling process, or when you want probabilistic interpretations of the estimates. Given the above, a flat-prior Bayesian fit likely won't tell you anything new as a validation.

This doesn't mean you can't switch to a Bayesian approach, but you would not want to do so based on the results you see after running the model. You would want to make that decision a priori, to avoid being prejudiced or biased.
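The flat-prior equivalence can be seen directly in the conjugate normal case. The sketch below (simulated data; the known noise variance and the prior scale `tau2` are simplifying assumptions) shows that the flat-prior posterior mean is exactly OLS, while an informative normal prior turns it into a ridge-like shrinkage estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = 4.0 + 0.5 * x + rng.normal(scale=0.3, size=n)
X = np.column_stack([np.ones(n), x])
sigma2 = 0.3 ** 2   # noise variance treated as known, for simplicity

# Flat prior on beta: the posterior mean solves the normal equations,
# i.e. it IS the OLS estimate.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Informative prior beta ~ N(0, tau2 * I): posterior mean is a
# ridge-like shrinkage of OLS toward the prior mean of zero.
tau2 = 1.0
post_mean = np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(2), X.T @ y)
print(beta_ols, post_mean)  # nearly identical here, since n is large
```

With n = 500 the data dominate this mild prior, which is exactly the point: a flat or weak prior reproduces the frequentist fit rather than independently validating it.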