Other than the fact that they are extremely small (p << 0.05), you have taken them out of context, so we cannot explain more than that. Note: e-16 is scientific notation indicating that the decimal point has been shifted 16 places to the left (e.g., 1e-5 = 0.00001).
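If the notation is unfamiliar, here is a quick check (my own illustration, not from the original question) of what e-notation means and why such values clear any conventional threshold:

```python
# e-notation: the exponent says how many places to move the decimal point.
# A negative exponent moves it left, so 1e-5 is 0.00001.
print(1e-5 == 0.00001)   # True
print(2.2e-16 < 0.05)    # True: values like 2.2e-16 are far below 0.05
```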
Absolutely not! The p-value is never an effect size measure.
The statistical test of significance answers the question "may I decide that there is a non-zero effect in the population?"
It is NOT about how large any effect might be.
I'm trying to imagine the numbers to get such a small p.
I agree with Karabiner. Statistical significance only means that the effect is large RELATIVE TO the experimental error. It does NOT mean that it is of any practical importance. Small p-values can occur, as Karabiner said, from very large sample sizes, or where the experiment deliberately excluded or controlled for virtually all sources of experimental error. When this is done, the results will not typically generalize to a broader population.
Ya got a p ~ 0, n is BIG, and I wrote something wrong. "What do the p-values below tell me?"
If Ho: mu1 = mu2, given variance1 = variance2 (tested by F), then a teeny p means that neither the variances nor the means are likely equal. "Likely," "suggests," and "probably" are stats-speak; we can't prove anything with stats. Youse guys haul out extraneous stuff; look at the question. I stand by my answer.