Bayesian Prediction Intervals

hlsmith

Less is more. Stay pure. Stay poor.
#1
I have been thinking about this every once in a while; I'll also acknowledge that I haven't tried to look up the answer myself yet. I thought I would throw it out here at TS and see what you have to say.

Say I am running a Bayesian regression model based on MCMC. I end up with a posterior distribution and grab percentiles from it for my credible intervals. However, if I wanted prediction intervals for new values, how would I get those from the model? In frequentist procedures you tweak the CI formula to get the PIs. A comparable question: in frequentist modeling, how would you get PIs using bootstrapping?

In both of these settings, would you use a modified alpha/critical value of interest? And in Bayes, is there any philosophical reason for PIs to be different from the frequentist approach? Thoughts?
 

Dason

Ambassador to the humans
#2
I think conceptually it's super easy to get prediction intervals in a Bayesian setting. You literally have samples of all of the parameters... So for each MCMC draw, just generate a random observation from the data distribution you're interested in, using that draw's parameters.

Easy peasy.
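That recipe amounts to sampling from the posterior predictive distribution. A minimal sketch for a normal linear regression, where `beta_draws` and `sigma_draws` stand in for real MCMC output (here they are faked with numpy rather than produced by an actual sampler, and `x_new` is a made-up new design row):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for MCMC output: one row/value per posterior draw.
n_draws = 4000
beta_draws = rng.normal([1.0, 2.0], 0.1, size=(n_draws, 2))  # intercept, slope
sigma_draws = np.abs(rng.normal(0.5, 0.05, size=n_draws))    # residual sd

x_new = np.array([1.0, 3.0])  # design row for the new observation

# For each draw, simulate one new observation from the data
# distribution under that draw's parameters.
mu = beta_draws @ x_new              # draw-specific predicted means
y_rep = rng.normal(mu, sigma_draws)  # one simulated y per draw

# 95% prediction interval = percentiles of the simulated observations.
lo, hi = np.percentile(y_rep, [2.5, 97.5])
print(f"95% PI: ({lo:.2f}, {hi:.2f})")
```

Note that this interval is wider than a credible interval for the mean, because each simulated observation carries both parameter uncertainty and residual noise.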
 

hlsmith

Less is more. Stay pure. Stay poor.
#3
just generate a random ob from data distribution you're interested in using the current params
@Dason - do you mean score the new values using the parameters? If so, would you then find the percentile values of interest across all of the scored values, one per MCMC run? And lastly, could you do the same thing via the bootstrap in frequentist approaches?
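On the bootstrap half of the question, one common recipe (not the only valid one) is: resample rows, refit, and add back a resampled residual so the interval reflects observation noise and not just coefficient uncertainty. A rough sketch on simulated OLS data, where all the numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = 1 + 2x + noise.
n = 200
x = rng.uniform(0, 5, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

x_new = np.array([1.0, 3.0])  # predict at x = 3
preds = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                         # resample rows
    b = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]  # refit
    eps = resid[rng.integers(0, n)]                     # resample one residual
    preds.append(x_new @ b + eps)                       # simulated new value

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% bootstrap PI: ({lo:.2f}, {hi:.2f})")
```

The added residual is what makes this a prediction interval; dropping `eps` would collapse it to an interval for the fitted mean.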
 

hlsmith

Less is more. Stay pure. Stay poor.
#7
So why can you use the posterior as a proxy for the parameter distribution when you have lots of data? Well, didn't some process (Bayes' formula) function to integrate the prior and likelihood and create the distribution of results? Bayesian theory says the parameters are random and the data are fixed, right? So the probabilities are taken from the parameter distribution, and you can use a ROPE (region of practical equivalence) to examine the estimate against the comparator and get a marker of success? Or am I rambling about something completely different?
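For what it's worth, in practice a ROPE check is just the fraction of posterior draws falling inside a pre-declared region of practical equivalence around the comparator. A hypothetical check, where both the effect draws and the ROPE limits are invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for MCMC draws of a treatment effect vs. the comparator.
effect_draws = rng.normal(0.4, 0.15, 4000)

rope_lo, rope_hi = -0.1, 0.1  # region of practical equivalence
inside = np.mean((effect_draws > rope_lo) & (effect_draws < rope_hi))
print(f"{inside:.3f} of the posterior lies in the ROPE")
```

A small fraction inside the ROPE is then read as evidence that the effect is practically different from the comparator.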
 

ana1

New Member
#8
Yes, but I found a paper which says that if an infinite amount of data remains to be collected at an interim analysis, the predictive probability of success of the trial equals the current posterior probability of efficacy, regardless of the posterior cutoff required for the trial to be a success. Normally the predictive probability and the current posterior only roughly track each other, and our data are always finite. I did not see why this is true. Maybe the predictive probability is a limit of the current posterior as the remaining data go to infinity?
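The claim can be checked by simulation in a simple beta-binomial trial: as the number of patients still to be enrolled grows, the predictive probability of success (PPoS) should approach the current posterior probability of efficacy, whatever cutoff defines "success". A Monte Carlo sketch, in which all the numbers (12/20 interim successes, threshold `p0 = 0.5`, cutoff 0.95) are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

a, b = 1 + 12, 1 + 8     # Beta(1,1) prior, 12 successes in 20 patients
p0, cutoff = 0.5, 0.95   # efficacy threshold and posterior success cutoff

# Current posterior probability of efficacy, P(p > p0 | interim data).
post_prob = 1 - stats.beta.cdf(p0, a, b)

def ppos(m, n_sim=4000):
    """Predictive probability of success with m patients remaining."""
    p = rng.beta(a, b, n_sim)    # draw the true rate from the posterior
    s = rng.binomial(m, p)       # simulate the future successes
    # Final posterior probability of efficacy for each simulated completion.
    final = 1 - stats.beta.cdf(p0, a + s, b + m - s)
    return np.mean(final > cutoff)  # fraction of "successful" trials

for m in (10, 100, 10000):
    print(m, ppos(m), "vs current posterior", post_prob)
```

Intuitively: with infinite future data the final posterior concentrates at the true rate, so the trial succeeds exactly when the true rate clears the threshold, and the probability of that event given today's data is just the current posterior probability of efficacy, no matter how strict the cutoff.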
 

hlsmith

Less is more. Stay pure. Stay poor.
#9
Can you provide a link to that source? Without seeing it, I'll note that many times when a sample size approaching infinity is referenced, asymptotic arguments are likely in play.