Frequentist v Bayesian

#1
After hours of videos and readings I am still having trouble understanding the fundamental conceptual differences between Bayesian and Frequentist probabilities. Some of the things I have read are contradictory, even from academics who seem like they know what they're talking about.

One of the main issues I am getting hung up on is priors. Frequentists begin with a prior data set of events to predict a result. If the data set expands, the prediction is updated to reflect the new data.
Bayesians begin with a different prior, and then use new data to update the prediction.

While the initial prediction comes from different priors, it seems like the results will converge with the addition of new data. The way the Bayesian approach is explained seems to indicate that the prior is updated and the prediction re-calculated, while suggesting that the frequentist approach is not updated with new data. What am I getting wrong?
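To show what I mean by convergence, here is a rough sketch I put together (a made-up coin-flip example with conjugate Beta priors; none of the numbers come from anything I read):

```python
# Two Bayesians start from very different priors on a coin's bias
# and update on the same flips. All numbers here are invented.
import random

random.seed(1)
true_p = 0.7
flips = [random.random() < true_p for _ in range(1000)]
heads = sum(flips)          # True counts as 1
tails = len(flips) - heads

# Beta priors expressed as pseudo-counts:
a1, b1 = 1, 1     # flat prior: no opinion about the coin
a2, b2 = 20, 2    # strong prior belief that the coin favors heads

# Conjugate update: posterior is Beta(a + heads, b + tails).
print((a1 + heads) / (a1 + b1 + len(flips)))  # ~0.70
print((a2 + heads) / (a2 + b2 + len(flips)))  # ~0.70 as well
```

With enough flips, both posteriors end up near the same place, which is exactly the convergence I thought should happen.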

I would appreciate any help in clarifying the difference between these two.
 

hlsmith

Not a robit
#2
Yes, frequentists don't use priors, or, not to confuse you, some would say they implicitly assume a flat prior of no effect. Bayesian approaches use priors, which can be flat (noninformative) or informative. The effect of the priors on the posterior bothered me at first as well, until I asked someone and realized that a prior is a distribution representing a range of values, and the more information behind it, the greater its precision (the smaller its variance). So there is a trade-off between how much data you have and how precise the priors are. If you have little prior information, the prior may be represented by a uniform distribution, meaning all values are equally likely, and its influence on the posterior will be inconsequential. If the prior is based on a small sample, it will likely have little precision and correspondingly little weight.
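To make the trade-off concrete, here is a minimal sketch, assuming a conjugate Normal model for a mean with known data precision (all numbers are made up): the posterior precision is just the prior precision plus the data precision, so whichever side carries more precision carries more weight.

```python
# Minimal sketch of the prior-precision vs. amount-of-data trade-off,
# using a conjugate Normal-Normal model. Numbers are invented.

def posterior(m0, tau0, xbar, n, tau=1.0):
    """Posterior mean/precision for a Normal mean, data precision tau."""
    tau_n = tau0 + n * tau                        # precisions add
    m_n = (tau0 * m0 + n * tau * xbar) / tau_n    # precision-weighted mean
    return m_n, tau_n

# Vague prior (low precision): the data dominate even with small n.
print(posterior(m0=0.0, tau0=0.01, xbar=5.0, n=10))     # mean ~ 5.0

# Precise prior, little data: the prior dominates.
print(posterior(m0=0.0, tau0=100.0, xbar=5.0, n=10))    # mean ~ 0.45

# Precise prior, lots of data: the data win the weight back.
print(posterior(m0=0.0, tau0=100.0, xbar=5.0, n=10000)) # mean ~ 4.95
```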
 
#3
Thank you very much for your reply. So the main difference is the starting point, and not where it goes from there with the addition of new data?
 
#6
What exactly do you mean by this?
Thank you for the question, because trying to answer it made me realize I am more confused than I thought. Plugging numbers into a formula is easy, but understanding the underlying philosophy is more challenging (for me, anyway).

I was trying to say that as new data is added to the set, you rerun the equations as a dynamic process of inference. But that doesn't make sense. Or at least it only makes sense in particular situations. I think... maybe I'm still just confused.

Hlsmith, thank you for that article. It was very helpful. For anyone interested, I found a video this morning that has a couple of very good examples that helped me as well. (I don't know the speaker, nor am I in any way connected to this video.)

 

spunky

Doesn't actually exist
#7
I think the main issue to keep in mind in the whole "Frequentist vs. Bayesian" 2.0 debate (because it resurfaces every time Bayes becomes popular) is how the two camps conceptualize probability. For frequentists, probability is a long-run relative frequency: expected events / total sample size. For Bayesians, it is a degree of belief. Both ways of understanding probability have been formalized, and both are consistent with Kolmogorov's axioms.

So it really comes down to how *you* think probability works. But that's beyond the realm of mathematics and has more to do with philosophy and yadda-yadda.
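If it helps, here is a tiny simulation that makes the frequentist "long-run" idea literal, with a made-up event probability (not anything from this thread):

```python
# Probability as long-run relative frequency: simulate an event with
# a fixed chance and watch the observed frequency settle toward it.
import random

random.seed(0)
p = 0.3  # the "true" probability of the event (invented)
for n in (100, 10_000, 1_000_000):
    hits = sum(random.random() < p for _ in range(n))
    print(n, hits / n)  # relative frequency approaches 0.3 as n grows
```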
 
#11
If the feelz make you uncomfortable, you just run the model twice, once with informative priors and once with flat priors. You can then see how sensitive your posterior is to the selection of priors.
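A minimal sketch of that check, assuming a conjugate Beta-Binomial model with made-up data (the post above doesn't name a specific model):

```python
# Run the same model under a flat prior and an informative prior,
# then compare the posteriors. Data are invented: 12 successes in 40.
successes, trials = 12, 40

priors = {
    "flat Beta(1, 1)":          (1, 1),
    "informative Beta(30, 10)": (30, 10),
}
for name, (a, b) in priors.items():
    a_post = a + successes
    b_post = b + (trials - successes)
    print(name, "-> posterior mean:", round(a_post / (a_post + b_post), 3))

# Flat prior:        ~0.310 (close to the raw 12/40 = 0.30)
# Informative prior: ~0.525 (with only 40 trials the prior pulls hard,
# so here the posterior is quite sensitive to the choice of prior)
```

If the two posteriors are close, the data dominate and the choice of prior barely matters; if they differ a lot, as in this toy example, the prior is doing real work and deserves scrutiny.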