There is a point about Bayesian inference that does not seem to be explained in detail anywhere, or else I am not looking at the right resources.

The likelihood in Bayesian inference is where the observed data modify the prior, and this fact is stressed in every book and course. However, the likelihood also seems to be the place where an assumption about the underlying distribution of the random variable is made.

Let's assume we're trying to find the population mean for a variable of interest using Bayesian inference. We have a set of observations, which enter the likelihood as P(y | mean), where y is a single observation.
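To make the setup concrete, here is a minimal grid-approximation sketch of what I mean. The data, the Normal(mu, sigma = 1) likelihood, and the Normal(5, 2) prior are all my own illustrative assumptions; the likelihood line is exactly the assumption my question is about:

```python
import numpy as np

# Made-up observations, just for illustration
y = np.array([4.8, 5.1, 5.3, 4.9, 5.4])

# Grid of candidate values for the population mean
mu_grid = np.linspace(0.0, 10.0, 1001)
dx = mu_grid[1] - mu_grid[0]

# Prior on the mean: Normal(5, 2) -- the part every book discusses
log_prior = -0.5 * ((mu_grid - 5.0) / 2.0) ** 2

# Likelihood: here I *assume* each y_i ~ Normal(mu, sigma = 1).
# This distributional assumption is what my question is about.
log_lik = np.array([-0.5 * np.sum((y - mu) ** 2) for mu in mu_grid])

# Posterior = prior x likelihood, normalised on the grid
log_post = log_prior + log_lik
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum() * dx

post_mean = np.sum(mu_grid * posterior) * dx
```

With this conjugate setup the grid answer matches the textbook normal-normal update, so the machinery works; my question is only about where the Normal(mu, 1) line comes from.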

Although there is a large amount of material and discussion about the choice of prior for the parameter (the mean here), I have not seen many resources about the distributional assumption for the data itself, i.e. the population distribution.

Most of the resources depict scenarios where you have a sample taken from a normal distribution. Once the scene is set (you either know or do not know the population mean or variance), you perform inference. But how on earth do you decide or justify that the data at hand come from a particular distribution?

If I'm not getting the whole thing wrong, you have to make an assumption about the population distribution, since that is how you compute the likelihood. What if the normality assumption does not hold? In the frequentist approach this is less of a problem, since by the central limit theorem the sampling distribution of the sample mean is approximately normal even when the population itself is not.

For the Bayesian, however, we make an inference directly about the population parameter, not about the sampling distribution, and to compute the likelihood of an observation we need an assumption about the distribution of the population.
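To show why this worries me, here is a sketch comparing the posterior mean under two different likelihood assumptions on the same made-up data containing one outlier. I use a flat prior over the grid so that any difference comes purely from the assumed data distribution; the Student-t alternative and nu = 3 are my own illustrative choices, not anything from a textbook:

```python
import numpy as np

# Made-up data: four well-behaved points and one gross outlier
y = np.array([4.8, 5.1, 5.3, 4.9, 15.0])

mu_grid = np.linspace(-5.0, 25.0, 3001)
dx = mu_grid[1] - mu_grid[0]

def posterior_mean(log_lik_fn):
    """Posterior mean of mu under a flat prior on the grid,
    so the posterior is just the normalised likelihood."""
    log_lik = np.array([log_lik_fn(mu) for mu in mu_grid])
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum() * dx
    return np.sum(mu_grid * post) * dx

# Assumption A: y_i ~ Normal(mu, 1)
m_normal = posterior_mean(lambda mu: -0.5 * np.sum((y - mu) ** 2))

# Assumption B: y_i ~ Student-t(nu = 3) around mu (heavier tails)
nu = 3.0
m_t = posterior_mean(
    lambda mu: -0.5 * (nu + 1) * np.sum(np.log1p((y - mu) ** 2 / nu))
)
```

Under the normal likelihood the outlier drags the posterior mean toward it (close to the sample mean of 7.02), while the heavy-tailed t likelihood largely ignores the outlier and keeps the posterior mean near 5. That is the sense in which the likelihood assumption materially changes the inference, and why I am asking how one is supposed to choose it.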

The question is: what are the methods for choosing a distribution for the underlying population, i.e. for choosing the likelihood?

Of course I may be misinterpreting Bayesian inference, and to be corrected would be a relief, for I am quite confused at the moment.

All the best