**Re: Probability that Bernoulli p-parameter is greater than some value, given N samples**
In the Bayesian setting, how would I proceed? Do I need to assume a prior distribution on p? And for the hypothesis test, what is the way to go there?

Yes, you need to assume a prior.

The beta family of priors is the conjugate family for binomial observations, meaning that if the prior is beta then so is the posterior. Hence it is easy to find the posterior without doing any integration, which is one reason to choose a prior from the beta family.

Specifically, if the prior is beta(a, b) then the posterior is beta(a + y, b + n - y), where n is the number of observations and y is the number of successes.
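As a minimal sketch of this conjugate update (the counts n = 20, y = 14 are made-up illustrative numbers, not from the original question):

```python
# Hypothetical example: start from a beta(1, 1) (uniform) prior,
# then observe y = 14 successes in n = 20 Bernoulli trials.
a, b = 1, 1
n, y = 20, 14

# Conjugate update: the posterior is beta(a + y, b + n - y).
a_post = a + y          # 15
b_post = b + n - y      # 7
```

No integration is needed; the update is pure bookkeeping on the two beta parameters.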

Here \(g(p; a, b) \propto p^{a-1}(1-p)^{b-1}\) is the beta(a, b) density.

If you have no prior knowledge about the value of p, one option is to use a uniform prior, which is the same as beta(1, 1).

It is common to use the expectation of p with respect to the posterior distribution as an estimate of p, that is \(E[p \mid y]\).
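For a beta(a, b) posterior this expectation has a closed form, a / (a + b). A short sketch, using a hypothetical beta(15, 7) posterior (e.g. a uniform prior after 14 successes in 20 trials):

```python
# Posterior mean of a beta(a_post, b_post) distribution: a_post / (a_post + b_post).
# The beta(15, 7) posterior here is a hypothetical example.
a_post, b_post = 15, 7
posterior_mean = a_post / (a_post + b_post)   # 15 / 22, roughly 0.68
```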

And having the distribution of p given the data, you can calculate probabilities such as \(\Pr(p > p_0 \mid y)\) and construct credible intervals, which are a commonly used Bayesian analog to hypothesis testing.
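One way to sketch this tail probability, again assuming a hypothetical beta(15, 7) posterior: normalize the density with the log-gamma form of the beta function and integrate numerically (in practice you would use a library beta CDF instead).

```python
import math

def beta_pdf(p, a, b):
    """beta(a, b) density, normalized via log-gamma to avoid overflow."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return p ** (a - 1) * (1 - p) ** (b - 1) / math.exp(log_B)

# Hypothetical posterior: beta(15, 7) (uniform prior, 14 successes in 20 trials).
a_post, b_post = 15, 7

def posterior_prob_greater(p0, n_grid=100_000):
    """Pr(p > p0 | y) by trapezoidal integration of the posterior density."""
    h = (1 - p0) / n_grid
    ys = [beta_pdf(p0 + i * h, a_post, b_post) for i in range(n_grid + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

prob = posterior_prob_greater(0.5)   # Pr(p > 0.5 | data)
```

With a posterior mean near 0.68, `prob` comes out well above one half; an equal-tailed credible interval can be read off the same posterior by inverting the CDF at, say, 0.025 and 0.975.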