Validity and reliability tests of ordinal self-reports?

#1
I am a graduate student in psychology, and I am completely stumped about choosing meaningful statistical tests for my data :confused:. The tests need to do two things: measure the dispersion of self-reports made on a 7-point ordinal scale, and compare those reports to their "right answers," which are on a 12-point ordinal scale. Essentially, the question I hope to answer is: are the ordinal self-reports valid when compared to a government-assigned value that is on a different scale (but conceptually covers the same range)? The responses are not normally distributed, about 30% of the cells in the 12x7 cross tab are zero, and the distributions are highly skewed at the ends.

What is the best way to do this? For the dispersion, is there a measure meant specifically for ordinal data? The standard deviation doesn't seem appropriate because the data are not interval.
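For the dispersion question, one rank-based option is a percentile summary (median and interquartile range), since it uses only the ordering of the responses. A minimal SAS sketch, where mydata and self7 are placeholder names for my dataset and the 7-point self-report:

   /* Median and IQR as an ordinal-friendly dispersion summary. */
   /* mydata and self7 are placeholder names.                   */
   proc univariate data=mydata noprint;
      var self7;
      output out=disp median=med q1=q1 q3=q3 qrange=iqr;
   run;

   proc print data=disp;
   run;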

I have looked into kappa, but I am unsure whether that is the right path for comparing the two variables. The statistics are being run in SAS, if anyone is comfortable sharing specific commands.

Thank you all in advance for your help. I apologize for immediately posting a question and not helping others first, but I am eager to learn from you.
 
#2
What do the zeros represent in your Likert scale? Is zero the lowest value, or does it have some other meaning?
You can try dichotomizing or trichotomizing both scales (self-report and the standard), then check the crosstabs.
Correlations are in order, as is Cronbach's alpha (both are available in PROC CORR); you could also try factor analysis (PROC FACTOR).
If you can recode the 12-point standard scale into a 7-point scale (to match the self-report), then you can use percent agreement and kappa to validate your scale. A sketch of the SAS steps is below.
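A rough sketch of those steps, with placeholder names throughout (mydata; self7 for the 7-point self-report; gov12 for the 12-point standard) and a made-up collapsing rule you would replace with one justified by the scales' content. Cronbach's alpha would additionally need the individual items, so it is left out here:

   /* Crosstab of the two scales as given.                         */
   proc freq data=mydata;
      tables gov12*self7 / norow nocol nopercent;
   run;

   /* Spearman rank correlation, appropriate for ordinal data.     */
   proc corr data=mydata spearman;
      var self7 gov12;
   run;

   /* Collapse the 12-point standard to 7 points, then request     */
   /* kappa via the AGREE option. The ceil() mapping is only one   */
   /* possible recode. Kappa needs a square table, so if some      */
   /* levels never occur you may have to add zero-weight cells     */
   /* (WEIGHT statement with the ZEROS option).                    */
   data recoded;
      set mydata;
      gov7 = ceil(gov12 * 7 / 12);
   run;

   proc freq data=recoded;
      tables gov7*self7 / agree;
   run;

Percent agreement can be read off the diagonal of the gov7*self7 table, and the weighted kappa in the AGREE output takes the ordering of the categories into account.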

Jenny Kotlerman
www.statisticalconsultingnetwork.com
 
#3

Thank you for your suggestions so far.

Zero is not a value on either scale; the zeros I mentioned are empty cells in the crosstab. I haven't tried dichotomizing or trichotomizing the scales yet, though that would be simple enough to do. Correlations work well for the data set as a whole, but I am interested in how close the answers are to the 'right one' separately for each category. What I want seems methodologically very close to a standard deviation. Beyond a standard deviation, is there a measure that separates out the share of responses attributable to chance agreement, so that I could analyze each response individually?
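One way to at least get the per-category picture on the original scales would be a by-group Spearman correlation; a minimal sketch, with mydata, category, self7, and gov12 again as placeholder names:

   /* Spearman correlation within each category.             */
   /* category is a placeholder for the grouping variable.   */
   proc sort data=mydata;
      by category;
   run;

   proc corr data=mydata spearman;
      by category;
      var self7 gov12;
   run;

That still isn't chance-corrected, though, which is why kappa keeps coming up.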

Recoding the data from the 12-point scale down to the 7-point scale in order to run percent agreement and kappa would add a confound to this reliability test that I'm uncomfortable with.