I'm doing Bayesian estimation of model parameters. The goal is to estimate the posterior distribution and get the uncertainty of the parameters. The model has three parameters and is pre-computed for a finite set of parameter combinations, forming a grid in parameter space. The output of the model is matched against observables with known Gaussian errors. The inference is done in the usual way, using the likelihood function

**P(data|model) = PRODUCT_i[ exp( -(data_i - model_i)^2 / (2 error_i^2) ) ]**, where

**data_i** and **error_i** are the individual observables and their errors, and **model_i** is the model's prediction for observable i.

When the number of observables is small, everything works fine: I compute the posterior probability distribution over the models, marginalize it for each parameter, and thus get the posterior distribution of each parameter. However, when the number of observables is large (>10^4), the posterior becomes localized in a region of parameter space that is much smaller than the separation between the pre-computed models in the grid. This causes issues because the marginalized cumulative distributions become essentially step functions, so no reasonable estimates can be extracted from them. Refining the grid, i.e. computing additional models, is in principle possible but extremely computationally expensive. Has anyone encountered a similar problem?
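For concreteness, here is a minimal NumPy sketch that reproduces the effect. The linear toy model, the 9^3 grid, and the coefficients are invented for illustration (my real model is pre-computed, not a formula); the point is only to show the posterior collapsing onto a single grid node when there are ~10^4 observables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (not the real model): 3 parameters on a 9x9x9 grid, and a
# linear "model" whose prediction for observable i is params . coeff[:, i].
a = np.linspace(0.0, 1.0, 9)
b = np.linspace(0.0, 1.0, 9)
c = np.linspace(0.0, 1.0, 9)
n_obs = 10**4

coeff = rng.normal(size=(3, n_obs))
true_params = np.array([0.52, 0.48, 0.55])   # deliberately off-grid
errors = np.full(n_obs, 0.1)                 # known Gaussian errors
data = true_params @ coeff + rng.normal(scale=errors)

# Evaluate the toy model on every grid node: shape (9, 9, 9, n_obs).
params = np.stack(np.meshgrid(a, b, c, indexing="ij"), axis=-1)
model = params @ coeff

# Work with the log-likelihood: with 10^4 factors the raw product of
# exponentials underflows, so sum the exponents and shift by the maximum
# before exponentiating.
loglike = -0.5 * np.sum(((data - model) / errors) ** 2, axis=-1)
posterior = np.exp(loglike - loglike.max())
posterior /= posterior.sum()

# Marginalize over the other two axes to get 1-D posteriors.
post_a = posterior.sum(axis=(1, 2))
post_b = posterior.sum(axis=(0, 2))
post_c = posterior.sum(axis=(0, 1))

# Nearly all probability mass sits on one grid node, so the marginalized
# cumulative distribution of each parameter is essentially a step function.
print("mass on the best grid node:", posterior.max())
print("CDF of parameter a:", np.cumsum(post_a))
```

With a handful of observables the 1-D posteriors span several grid nodes and quantiles are meaningful; with 10^4 observables essentially all of the mass lands on the single node nearest the true parameters, which is exactly the step-function behavior described above.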