**Re: Why not go with tests dealing with unequal variances/unbalanced sample size first**
I think this is a good question. Say you have a situation where:

1) there is a robust alternative that doesn't add undue complications to the running of the analysis or interpretation of the results

2) the robust test is known to perform well when the assumption of concern is violated

3) the robust test is known not to do substantially worse than the conventional parametric test when the assumption is actually met.

Then it's pretty reasonable to just pick the robust test every time. A good example is the t-test (here is an article suggesting we just always use Welch's version, without testing for equality of variances).
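To make the Welch example concrete, here is a minimal sketch (assuming scipy is available) comparing Student's and Welch's t-tests on two samples with unequal variances and unbalanced sizes; the group sizes, variances, and seed are arbitrary choices for illustration:

```python
# Student's vs Welch's t-test on samples with unequal variances
# and unbalanced sizes (illustrative parameters, not from the post).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=0.0, scale=1.0, size=100)  # larger group, smaller variance
b = rng.normal(loc=0.0, scale=3.0, size=20)   # smaller group, larger variance

# Student's t-test pools the variances; Welch's does not.
t_student = stats.ttest_ind(a, b, equal_var=True)
t_welch = stats.ttest_ind(a, b, equal_var=False)

print(f"Student: t={t_student.statistic:.3f}, p={t_student.pvalue:.3f}")
print(f"Welch:   t={t_welch.statistic:.3f}, p={t_welch.pvalue:.3f}")
```

When the variances really are unequal and the smaller group has the larger variance, the pooled standard error is too small, which is exactly the situation where Student's version misbehaves and Welch's does not.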

But a lot of the time, conditions 1-3 may not be met. E.g.,

1) The robust test may be hard to implement, or change the hypothesis being tested (e.g., rank-based tests such as the Mann-Whitney look like drop-in replacements for their parametric equivalents, but test quite different null hypotheses: the Mann-Whitney tests whether one distribution tends to produce larger values than the other, not whether the means differ)

2) The supposedly robust test may actually not be (e.g., the "asymptotically distribution free" estimator in SEM performs poorly in comparison to more conventional estimators unless you have a massive sample size)

3) The robust alternative may have less power than the parametric test when its assumptions are met.
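Point 1 is easy to see by construction. Here is a hedged sketch (scipy assumed; the two distributions are invented for illustration) with two populations that have the same mean but different shapes: the Mann-Whitney U test detects the difference, while a test of mean equality has nothing to find.

```python
# Same population mean (1.0), different shapes: the Mann-Whitney U test
# rejects, because it is not a test of equal means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=2000)      # mean 1, right-skewed
y = rng.normal(loc=1.0, scale=0.5, size=2000)  # mean 1, symmetric

u = stats.mannwhitneyu(x, y, alternative="two-sided")
w = stats.ttest_ind(x, y, equal_var=False)  # Welch's test of the means

print(f"Mann-Whitney p = {u.pvalue:.2e}")  # essentially zero
print(f"Welch t-test p = {w.pvalue:.3f}")  # no mean difference to detect
```

So a "significant Mann-Whitney" here would be a correct rejection of its own null, but a wrong answer to the question "do the means differ?" That is what it means for the robust test to change the hypothesis being tested.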
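Point 3 can be checked by simulation. This is a rough sketch under assumptions favorable to the parametric test (normal data, a pure mean shift, alpha = 0.05, with arbitrary illustrative sample sizes and effect size): compare the empirical power of Welch's t-test and the Mann-Whitney U test. For this particular pairing the gap is small (the Mann-Whitney's asymptotic relative efficiency under normality is about 3/pi, roughly 0.955); for cruder robust alternatives it can be much larger.

```python
# Empirical power of Welch's t-test vs the Mann-Whitney U test when the
# parametric assumptions hold (normal data, mean shift). Parameters are
# illustrative choices, not from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps, n, shift, alpha = 2000, 15, 0.8, 0.05
t_reject = mw_reject = 0

for _ in range(reps):
    x = rng.normal(0.0, 1.0, size=n)
    y = rng.normal(shift, 1.0, size=n)
    if stats.ttest_ind(x, y, equal_var=False).pvalue < alpha:
        t_reject += 1
    if stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        mw_reject += 1

print(f"t-test power:       {t_reject / reps:.3f}")
print(f"Mann-Whitney power: {mw_reject / reps:.3f}")
```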