"I informed him that his data did not look normal to begin with (normality assumption)..."

The statistical significance test for "parametric" models does not assume normally distributed data at all.

In some instances, the errors of the statistical model (the residuals) should preferably be sampled from a normally distributed population of residuals.
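To make this concrete, here is a small sketch of my own (not part of the original exchange): two perfectly normal groups with different means produce a pooled sample that looks badly "non-normal" (bimodal), while the model's residuals are normal. Checking normality on the raw outcome would mislead here.

```python
# Sketch: the normality assumption concerns the model's errors
# (residuals), not the raw outcome values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=200)
group_b = rng.normal(loc=5.0, scale=1.0, size=200)

# Pooled raw data: mixture of two normals, clearly bimodal.
pooled = np.concatenate([group_a, group_b])

# Residuals of the group-means model: deviations from each group mean.
residuals = np.concatenate([group_a - group_a.mean(),
                            group_b - group_b.mean()])

print("Shapiro-Wilk p, pooled raw data:", stats.shapiro(pooled).pvalue)
print("Shapiro-Wilk p, residuals:      ", stats.shapiro(residuals).pvalue)
```

The Shapiro-Wilk test soundly rejects normality for the pooled raw data but not for the residuals, even though the model is exactly correct.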

"...and his samples were not necessarily equal or independent, which would be a requirement for some parametric tests."

Equal sample sizes are useful, but not a necessary assumption.

Independence of observations is an issue which is important for "parametric" as well as for "nonparametric" analyses.

The main problem here is the very small sample size, which might be an argument for choosing an analysis which does not require fulfilment of many assumptions.
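One such option is a permutation test, which needs essentially only exchangeability of the observations under the null hypothesis, with no distributional assumption. A minimal sketch with two made-up tiny samples (the numbers are hypothetical, purely for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical tiny samples, n = 5 per group (illustration only).
a = np.array([3.1, 4.2, 2.8, 4.6, 3.7])
b = np.array([5.9, 6.4, 4.8, 7.1, 6.0])

def mean_diff(x, y):
    # Test statistic: difference in group means.
    return np.mean(x) - np.mean(y)

# With 5 + 5 observations there are only C(10, 5) = 252 distinct group
# assignments; since n_resamples exceeds that, SciPy enumerates them
# all and the test is exact.
res = stats.permutation_test((a, b), mean_diff,
                             permutation_type='independent',
                             n_resamples=10_000)
print(f"exact permutation p-value: {res.pvalue:.4f}")
```

Because the two samples here do not overlap, the observed mean difference is the most extreme of all 252 assignments, and the two-sided p-value comes out at 2/252, roughly 0.008.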

"This struck me as odd; can anyone explain why, even with an n as low as 10, the nonparametric test was more powerful than the parametric one?"

Well, you did not describe the data in sufficient detail, you did not describe the actual tests performed, and you did not precisely describe the actual results. So without being clairvoyant, this is difficult to answer. If the assumptions of a "parametric" test are violated, or if the test wasn't carried out properly, then the "nonparametric" (here: rank-based?) test can of course be more powerful.
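As a rough Monte-Carlo illustration of that last point (my own hypothetical setup, not the poster's data): with n = 10 per group drawn from a heavy-tailed population with a true location shift, the rank-based Mann-Whitney U test can reject more often than Student's t-test at the same alpha.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, shift, reps = 10, 2.0, 2000
t_rejects = u_rejects = 0

for _ in range(reps):
    # Heavy-tailed (Cauchy) populations differing only by a shift.
    a = rng.standard_cauchy(n)
    b = rng.standard_cauchy(n) + shift
    t_rejects += stats.ttest_ind(a, b).pvalue < 0.05
    u_rejects += stats.mannwhitneyu(a, b).pvalue < 0.05

print(f"empirical power, t-test:       {t_rejects / reps:.2f}")
print(f"empirical power, Mann-Whitney: {u_rejects / reps:.2f}")
```

The Cauchy distribution is an extreme case chosen to make the effect obvious; with outcomes this heavy-tailed, the sample mean and variance behave erratically and the t-test loses considerable power, while the ranks are unaffected by the outliers.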

With kind regards

Karabiner