Hi all!
I have tested 7 similar sound devices.
For each device I asked respondents to evaluate its sound characteristics using dichotomies, e.g. “cold” vs. “warm”. The evaluation is done on a scale like the one below (they had to tick the appropriate vertical line):
cold warm
∣ —— ∣ —— ∣ —— ∣ —— ∣ —— ∣ —— ∣ —— ∣
I had 11 dichotomies.
37 people completed the questionnaire.
My question concerns reliability:
When I compute Cronbach’s alpha for the whole set (7 devices, 11 dichotomies, 37 respondents), I get an alpha of .854.
This is strange, because 2 of the 7 devices were actually the same. So the answers for those two should be correlated if the questionnaire was completed reliably, no? But this is absolutely not the case.
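For reference, the textbook form of Cronbach’s alpha (I’m assuming this is what my software computes), with $k$ items, item variances $\sigma^2_i$ taken across respondents, and $\sigma^2_X$ the variance of the respondents’ total scores:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_i}{\sigma^2_X}\right)$$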
My question then is:
Should I compute Cronbach’s alpha for each dichotomy separately, i.e. run the test 11 times, each time on (37 people, 1 dichotomy, 7 devices)?
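To make that concrete, here is a minimal sketch of that per-dichotomy calculation in Python (the cronbach_alpha helper, the ratings dict, and the random placeholder data are all made up for illustration; the real 37 × 7 rating matrices would go in their place):

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                         # number of items (here: devices)
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item across respondents
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical layout: one 37 x 7 matrix (respondents x devices) per dichotomy,
# holding the ticked scale positions. Random placeholder data stands in for it.
rng = np.random.default_rng(0)
ratings = {f"dichotomy_{d + 1}": rng.integers(1, 8, size=(37, 7)).astype(float)
           for d in range(11)}

# One alpha per dichotomy: items = the 7 devices, cases = the 37 respondents.
for name, mat in ratings.items():
    print(name, round(cronbach_alpha(mat), 3))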
Thanks!
Luc