- Thread starter charms

It depends on which statistic you are running. If the number of predictor variables exceeds the sample size (and here the intercept counts as a predictor), then I believe you won't have enough degrees of freedom to run many methods. It is also true that for some methods (such as logistic regression) the results are only correct with larger sample sizes (they are only asymptotically accurate). For regression, for example, a common rule of thumb is to have 100-plus cases, with more for every additional predictor variable (you can actually run with fewer, but doing so is dangerous, particularly if you also violate the assumptions of the method).
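To see the degrees-of-freedom problem concretely, here is a minimal sketch (with made-up random data, not anyone's actual study) of what happens when the number of coefficients, intercept included, reaches or exceeds the number of observations: the least-squares fit interpolates the data exactly, leaving no residual degrees of freedom to estimate error variance or standard errors.

```python
import numpy as np

# Hypothetical data: 6 observations, 6 predictors plus an intercept (7 coefficients).
rng = np.random.default_rng(0)
n, p = 6, 6
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # 7 columns
y = rng.normal(size=n)

# Residual degrees of freedom: n minus the number of estimated coefficients.
df_resid = n - X.shape[1]
print("residual df:", df_resid)  # negative: nothing left to estimate error with

# Least squares still returns a solution, but it fits the data exactly,
# so the residual variance (and hence every standard error) is undefined.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print("max |residual|:", np.abs(resid).max())  # essentially zero
```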

Your power will be extremely low, so your type II error rate will be very high, and generalizing will be extremely difficult. Have you considered a qualitative approach, such as interviews, rather than a quantitative one (I don't know, of course, whether your question supports that)? I have not seen statistical analysis with just six people before.
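To put a rough number on how low the power gets, here is a sketch of a standard power calculation for a two-sample t-test with three people per group (six total), assuming a medium true effect of Cohen's d = 0.5 — the effect size and alpha are illustrative assumptions, not figures from the thread.

```python
import numpy as np
from scipy import stats

# Hypothetical scenario: two groups of 3, two-sided test at alpha = 0.05,
# assumed true effect size Cohen's d = 0.5 (a "medium" effect).
n1 = n2 = 3
d = 0.5
alpha = 0.05

df = n1 + n2 - 2
ncp = d * np.sqrt(n1 * n2 / (n1 + n2))    # noncentrality parameter
t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value

# Power = P(|T| > t_crit) when T follows the noncentral t distribution.
power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
print(f"power ≈ {power:.2f}, type II error ≈ {1 - power:.2f}")
```

Under these assumptions the power comes out far below the conventional 0.80 target, which is the point being made: with n = 6 the type II error rate is enormous.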

It has been pointed out that my comments above do not apply to a high-quality controlled experiment in which you have reason to believe the results generalize to the larger population. Those commenters strongly disagree with the rules of thumb, which of course are just general guidelines that may not apply to a specific analysis. The sources I have seen, which focus on observational data in the social sciences, strongly assert the need for large samples, but in other fields the perspective is different.

For regression for example common rules of thumb are to have 100 plus cases with more for every predictor variable (although you can actually run with fewer, it is just dangerous to do so particularly as you violate the assumptions of the method).

Regression is more robust to distributional violations with large sample sizes, but a large sample size is not itself one of the assumptions of the method. IMO people worry far too much about sample size and not nearly enough about where the sample came from. E.g., if you have no random selection from a population, and no random assignment to conditions, then what are you trying to make inferences about?

Your power will be extremely low so that your type II error rate will be very high. And generalizing will be extremely difficult.

By coincidence, the previous article published in that journal showed that you can legitimately use a t-test with as few as 2 people in each subsample. Note that Student's t-test *is* regression: regression with a single binary predictor.
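The t-test/regression equivalence is easy to verify numerically. Below is a sketch with made-up data (the group values are illustrative, not from any cited study): the pooled-variance two-sample t statistic matches, up to sign, the t statistic on a 0/1 group dummy in an ordinary least-squares regression.

```python
import numpy as np
from scipy import stats

# Illustrative data: two tiny groups (the equivalence holds for any sizes).
g1 = np.array([1.2, 2.3, 3.1])
g2 = np.array([2.0, 3.5, 4.1])

# Classic two-sample Student's t-test (equal variances assumed).
t_stat, p_val = stats.ttest_ind(g1, g2, equal_var=True)

# The same test as OLS regression: y on an intercept and a 0/1 group dummy.
y = np.concatenate([g1, g2])
X = np.column_stack([np.ones(6), np.r_[np.zeros(3), np.ones(3)]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = len(y) - 2
s2 = resid @ resid / df                           # residual variance
se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_reg = beta[1] / se_slope                        # t statistic for the dummy

print(t_stat, t_reg)  # equal in magnitude; sign depends on group coding
```

The slope coefficient is exactly the difference in group means, and its standard error is the pooled standard error from the t-test, which is why the two statistics agree in magnitude.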