Cronbach's Alpha and z-scores

#1
I was wondering if someone could help me with this little mystery.

I have three items I want to use to assess internal consistency. The first two are 7-point Likert scales that, together, have high reliability. The third item asks people to enter a dollar value with no restricted range (i.e., it is continuous). When I add this third item, Cronbach's alpha drops to close to 0.

However, if I first convert the scores to standard scores (z-scores), reliability jumps to over .70! The correlations stay the same when I run a Pearson's r. Given that I assumed Cronbach's alpha was based on correlations, why does standardizing change the value so much?
 

Dragan

Super Moderator
#3

In short, the Pearson correlation coefficient is invariant to linear transformations. Coefficient alpha is a form of intraclass correlation based on variance components and is not invariant to linear transformations.
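A quick sketch of that invariance point (the variable names and the scale factor here are just illustrative assumptions): rescaling a variable leaves Pearson's r untouched, but its variance, which is what alpha is built from, changes dramatically.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = x + rng.normal(scale=0.5, size=100)

# Pearson's r before and after a linear transformation of x
r_raw = np.corrcoef(x, y)[0, 1]
r_scaled = np.corrcoef(1000 * x + 5, y)[0, 1]  # e.g., converting to "dollars"

# r is unchanged by the linear transformation...
print(r_raw, r_scaled)
# ...but the variance, which feeds into alpha, is not
print(np.var(x, ddof=1), np.var(1000 * x + 5, ddof=1))
```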

That said, think about Cronbach's alpha in the context of a simple repeated measures design and how it can (alternatively) be computed from the Mean Squares associated with that design. In short, by rescaling one item you change the variance(s) of the classes (or raters), and thus you change the value of alpha.