I'm afraid that you have problems. Basically you are trying to discover too much all at once.
If you google multiple p corrections you will find quite a few ways folk have devised to improve on Bonferroni, but realistically they won't help much in your situation.
If you are still in the planning stage, and haven't collected your data yet, you might think about redirecting your efforts into just a few key questions.
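To make the trade-off concrete, here is a hand-rolled sketch of plain Bonferroni next to the slightly less conservative Holm step-down correction. The p-values are made up purely for illustration; in practice you'd feed in the p-values from your own tests.

```python
# Sketch: Bonferroni vs. Holm step-down correction, implemented by hand.
# The p-values below are invented for illustration only.

def bonferroni(pvals, alpha=0.05):
    """Reject H0 wherever p * m <= alpha (m = total number of tests)."""
    m = len(pvals)
    return [p * m <= alpha for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm step-down: compare sorted p-values to alpha / (m - rank)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] > alpha / (m - rank):
            break  # once one test fails, all larger p-values fail too
        reject[i] = True
    return reject

pvals = [0.001, 0.012, 0.02, 0.04, 0.20]
print(bonferroni(pvals))  # [True, False, False, False, False]
print(holm(pvals))        # [True, True, False, False, False]
```

With five tests, Holm rescues the second-smallest p-value (0.012) that plain Bonferroni throws away, but as noted above, with dozens of tests neither method leaves you much power.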
So the survey has gone out and the data have been collected (this is one of the problems of having distance supervisors, I guess!)
- What’s the max number of tests I could run before the issues we’re talking about kick in?
- Could I run inferential statistics on only some of my questions? It would lead to major limitations, but surely that’s better than running none (I’ve seen other surveys that just report descriptives).
- I could also limit or remove some comparisons. For example, I don’t really need both age and years’ experience; I would argue that the latter is more important. Additionally, I was advised to “just put a COVID question(s) in”, but it doesn’t really add to the project itself. So I could cut these as a starter.
So if the sample size is large enough, then sex, age, current role, years in role, and level of qualification could jointly
be used to predict the other variables, using multiple linear regression, or logistic regression (for yes/no
variables). That would reduce the number of analyses.
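A minimal sketch of that idea with numpy's least-squares solver. The variable names and synthetic data are hypothetical stand-ins for the survey variables, not taken from the actual dataset: one model with all predictors gives one coefficient per predictor, instead of a separate test per pair.

```python
# Sketch of "one regression instead of many pairwise tests",
# using synthetic (invented) data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: sex (0/1), years in role, qualification level (0-3).
sex = rng.integers(0, 2, n)
years = rng.uniform(0, 20, n)
qual = rng.integers(0, 4, n)

# Hypothetical outcome: a survey score driven by years in role and qualification.
score = 10 + 0.5 * years + 1.0 * qual + rng.normal(0, 1, n)

# Design matrix with an intercept column; fitting one joint model yields
# one set of coefficients rather than a separate analysis per predictor.
X = np.column_stack([np.ones(n), sex, years, qual])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta)  # roughly [10, 0, 0.5, 1.0]
```

For a yes/no outcome you would swap the least-squares fit for logistic regression (e.g. statsmodels' `Logit` or scikit-learn's `LogisticRegression`), but the structure of the design matrix is the same.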
Karabiner