Likert item analysis

#1
Hi Community!

I have to analyze the data from a questionnaire in which 180 participants were asked to rate the importance of 14 isolated factors on a Likert scale (from 1 = not important to 5 = very important). Among other things, participants were also asked the following:

- how many years of experience they have with a particular type of project (1-3; 3-5, etc.)
- how many employees work for their company (1-49, 50-250, etc.)
- how they would describe the implementation status of a particular manufacturing concept
- their educational background, type of job, etc.

Now, I would like to find out how variables like job experience affect the rating of each factor. I have already checked this forum and other sites like ResearchGate to get a general understanding, and I used a decision tree that suggested the Kruskal-Wallis H test. While the test did show differences between certain groups, I expected more of them. I suspect N is too small, so that not every group is sufficiently represented.

Do you have any other ideas on how I could analyze the questionnaire in SPSS?
 

katxt

Active Member
#2
Let's just look at one typical test you are planning, say Experience and Factor1. It looks like you have split the subjects into several Experience groups. The question is "do at least two of the various groups score noticeably differently on Factor1?" One approach is certainly the Kruskal-Wallis test, which will tell you whether there is a real difference between at least two of the groups. You then have to do post hoc tests (Mann-Whitney, say) to find out which groups are different.
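If it helps to see the mechanics outside SPSS, here is a rough sketch of those two steps in Python with scipy; the file name and the column names (Experience, Factor1) are just placeholders for whatever your data actually uses.

```python
# Sketch only: one Kruskal-Wallis test for one factor, then pairwise
# Mann-Whitney post hoc tests. File and column names are invented.
from itertools import combinations

import pandas as pd
from scipy.stats import kruskal, mannwhitneyu

df = pd.read_csv("survey.csv")  # one row per participant (hypothetical file)

# Factor1 ratings split by Experience group
groups = [g["Factor1"].values for _, g in df.groupby("Experience")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

# Post hoc: Mann-Whitney for every pair of Experience groups
for (name_a, grp_a), (name_b, grp_b) in combinations(df.groupby("Experience"), 2):
    u_stat, p = mannwhitneyu(grp_a["Factor1"], grp_b["Factor1"], alternative="two-sided")
    print(f"{name_a} vs {name_b}: U = {u_stat:.1f}, p = {p:.4f}")
```

SPSS has the same tests in its nonparametric procedures; the point of the sketch is only the two-step structure - one overall test, then the pairwise follow-ups.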
The big problem with this approach is multiple p values, which lead to false positives. You have 14 factors and at least 5 sets of groupings, which means at least 70 K-W tests. Your results are going to be full of false positives unless you set the cutoff level for significance at much less than p = 0.05, say p = 0.005 at the most. The post hoc tests then need an even lower cutoff, say p = 0.001.
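If you do end up with a long list of p values, one way to apply the correction in one go is something like the following; statsmodels is my own suggestion here, and the p values are invented just to show the call.

```python
# Sketch: apply a Bonferroni correction to a collection of p values.
# The p values below are made up for illustration.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.021, 0.048, 0.30, 0.001]

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
for p_raw, p_adj, significant in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}, adjusted p = {p_adj:.3f}, significant: {significant}")
```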
Another approach, if the groupings fall naturally into a scale, is to do Spearman correlations with the factors, but the problem with multiple p values remains.
I suspect the problem is not so much that N is too small, but that the number of factors and groups is too large. kat
 
#3
Thank you very much, Kat, I appreciate your support!

Since I am just taking my first steps in statistics, could you explain why multiple p values lead to false positives? Also, could you please explain what you mean by "if the groupings fall naturally into a scale" and how I could check this?

Sorry for all the questions, I just want to be sure that I understand everything correctly.
 

katxt

Active Member
#4
The common p < 0.05 cutoff for significance acknowledges the possibility that there is no real difference anywhere, but that just by bad luck your sample looks as if there really is one. The tests and cutoffs are designed so that you will claim a real difference when there isn't one less than 5% of the time (p = 0.05). This is just a risk you have to take. It's like playing Russian roulette with a 20-chambered gun and one bullet. There is a 1 in 20 chance that you will shoot yourself in the foot by finding a false positive. Those are not bad odds. In your case you have 70 tests, so it's like spinning the chamber and pulling the trigger 70 times. Those are not good odds any more; you are going to shoot yourself in the foot several times. To keep the probability of a bad shot down to the acceptable 5% over 70 pulls, you need a revolver with far more chambers - around 1400, which corresponds to a per-test cutoff of roughly 0.05/70 ≈ 0.0007. Google Bonferroni correction.
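The arithmetic behind that picture, assuming the 70 tests are independent, is short enough to check directly:

```python
# Chance of at least one false positive across 70 independent tests
alpha, n_tests = 0.05, 70

p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(f"At alpha = 0.05 per test: {p_any_false_positive:.2f}")  # roughly 0.97

# Bonferroni: divide the cutoff by the number of tests
per_test_alpha = alpha / n_tests
p_any_false_positive_bonf = 1 - (1 - per_test_alpha) ** n_tests
print(f"At alpha = {per_test_alpha:.5f} per test: {p_any_false_positive_bonf:.3f}")  # roughly 0.049
```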
Some groupings fall naturally into a scale - length of service, or years of college education, for example. These can sometimes be converted into a numerical scale rather than left as groups. Check by asking whether there is a natural numerical progression through the groups. If there is, you can try correlation (there is a small sketch below).
Other groups, like sex, ethnicity, or type of job, don't make a scale, and you have to keep them as groups. Then you're stuck with K-W.
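For the correlation route, a Spearman correlation is only a couple of lines once the ordered bands have been recoded as numbers; the column names and the particular band labels below are invented, so substitute your own.

```python
# Sketch: recode an ordered grouping as numbers, then correlate with a factor.
# File name, column names, and band labels are all hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("survey.csv")

# Map the ordered experience bands onto a simple 1, 2, 3, ... scale
experience_scale = {"1-3": 1, "3-5": 2, "5-10": 3, "10+": 4}
df["ExperienceScore"] = df["Experience"].map(experience_scale)

rho, p_value = spearmanr(df["ExperienceScore"], df["Factor1"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```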