I need to write a discussion of three approaches to model selection: the p-value, Bayes factors (approximated via BIC), and leave-one-out cross-validation.

The p-value is not my preferred option: statistical significance depends on sample size, the p-value is computed from imaginary data (outcomes more extreme than those actually observed), and it depends on the intentions of the researcher (the stopping rule). I have good literature about all that, but after reading it I am confused: what is the real interpretation of the p-value if we find Pr(>Chi) = .986 when comparing model 1 (H0) and model 2 (HA), and what is the correct statement of the hypotheses?

H0 : model 1 is adequate (good enough)?

HA : model 2 is (more) adequate?
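To make the Pr(>Chi) concrete, here is a minimal sketch of the likelihood-ratio test that produces such a number — assuming nested Gaussian linear models fitted by maximum likelihood, on simulated data with made-up variable names:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                          # hypothetical extra predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)          # x2 is actually irrelevant

def gaussian_loglik(y, X):
    """Maximized log-likelihood of a linear model with Gaussian errors
    (sigma^2 profiled out at its MLE, rss/n)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    m = len(y)
    return -0.5 * m * (np.log(2 * np.pi * rss / m) + 1)

X0 = np.column_stack([np.ones(n), x1])           # model 1 (H0): intercept + x1
X1 = np.column_stack([np.ones(n), x1, x2])       # model 2 (HA): adds x2

ll0, ll1 = gaussian_loglik(y, X0), gaussian_loglik(y, X1)
lr_stat = 2 * (ll1 - ll0)                        # >= 0 because the models are nested
df_diff = 1                                      # one extra parameter under HA
p_value = stats.chi2.sf(lr_stat, df_diff)        # upper tail: this is Pr(>Chi)
```

A large Pr(>Chi) here just means the observed improvement of model 2 over model 1 is unremarkable under H0; it is not the probability that model 1 is true.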

The Bayes factor overcomes these problems with the p-value, but does that mean nothing is wrong with this measure?
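For what it's worth, here is how I understand the BIC-based (Schwarz) approximation to the Bayes factor, BF10 ≈ exp((BIC0 − BIC1)/2) — a sketch on simulated data with hypothetical names, not a definitive implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                          # hypothetical irrelevant predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def gaussian_loglik(y, X):
    """Maximized Gaussian log-likelihood with sigma^2 at its MLE."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    m = len(y)
    return -0.5 * m * (np.log(2 * np.pi * rss / m) + 1)

X0 = np.column_stack([np.ones(n), x1])           # model 1
X1 = np.column_stack([np.ones(n), x1, x2])       # model 2

# BIC = k*ln(n) - 2*loglik; k counts sigma^2 too, but that +1 cancels in the difference
k0, k1 = X0.shape[1] + 1, X1.shape[1] + 1
bic0 = k0 * np.log(n) - 2 * gaussian_loglik(y, X0)
bic1 = k1 * np.log(n) - 2 * gaussian_loglik(y, X1)

bf10 = np.exp((bic0 - bic1) / 2)                 # Schwarz approximation to BF(model 2 : model 1)
```

Note this is only a large-sample approximation to the true Bayes factor, with an implicit (unit-information) prior hidden inside it — which is part of why I ask whether the BIC route is really problem-free.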

I think I prefer leave-one-out cross-validation over both methods, because it stays much closer to the data! But what I don't get is what the real advantages of CV are compared to BF. Does anybody have good literature about that?
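To make the comparison concrete, this is the kind of leave-one-out computation I have in mind — plain least squares on simulated data, with made-up variable names, refitting n times rather than using any shortcut formula:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                          # hypothetical irrelevant predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def loo_mse(y, X):
    """Leave-one-out mean squared prediction error for a linear model."""
    m = len(y)
    errs = np.empty(m)
    for i in range(m):
        mask = np.arange(m) != i                 # drop observation i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs[i] = (y[i] - X[i] @ beta) ** 2      # predict the held-out point
    return errs.mean()

X0 = np.column_stack([np.ones(n), x1])           # model 1
X1 = np.column_stack([np.ones(n), x1, x2])       # model 2

mse0, mse1 = loo_mse(y, X0), loo_mse(y, X1)      # lower is better
```

Unlike the Bayes factor, this scores models directly by out-of-sample predictive error, which is what I mean by "closer to the data" — but I would still like literature on when CV and BF disagree.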

Thanks!