Comparing 95% confidence intervals between groups: Krippendorff’s alpha

#1
I am performing an interrater reliability analysis. My analysis has several nominal variables that are being graded, and I've used Krippendorff's alpha as my interrater reliability statistic. I want to compare reliability between two groups of raters (who were trained differently). I have 95% confidence intervals produced by bootstrapping via the KALPHA macro for SPSS. Is there a more meaningful way to compare the two groups than looking for overlapping confidence intervals? Ideally something that would give specific p-values would be nice :)
Thanks for any suggestions!
 
#2
To get a p-value you need the sampling distribution of the difference between the two alphas, and you can get that by bootstrapping. You will need to modify the SPSS macro, write your own code, or use another program. The procedure is:

1. Put the two groups side by side as one dataset, so that each row (unit) carries the ratings from both groups of raters.
2. Compute Krippendorff's alpha for each group and record the difference.
3. Resample the rows with replacement, keeping each row intact so the pairing between the groups is preserved, and recompute the difference on the resampled data.
4. Repeat many times (several thousand resamples) to build up a distribution of differences.
5. See where 0 sits in that distribution. The two-sided p-value is roughly twice the proportion of bootstrap differences in the smaller tail; equivalently, if 0 falls outside the central 95% of the distribution, the difference is significant at the 0.05 level.
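
For example, here is a minimal Python sketch of that procedure (not SPSS). It assumes the third-party `krippendorff` package for the alpha computation, a units-by-raters layout of nominal codes with `np.nan` marking missing ratings, and that both groups rated the same units; those details are my assumptions, not anything from the KALPHA macro itself.

```python
# Paired bootstrap for the difference between two Krippendorff's alphas.
# Assumptions (not from the original macro):
#   * the third-party `krippendorff` package is installed (pip install krippendorff),
#   * each group's ratings are a units-by-raters NumPy array of nominal codes,
#     with np.nan marking missing ratings,
#   * both groups rated the same units, so row i of group A and row i of group B
#     refer to the same unit (this is what "keeping the rows intact" preserves).
import numpy as np
import krippendorff


def nominal_alpha(units_by_raters):
    """Krippendorff's alpha for nominal data; the package expects raters x units."""
    return krippendorff.alpha(reliability_data=units_by_raters.T,
                              level_of_measurement="nominal")


def bootstrap_alpha_difference(group_a, group_b, n_boot=10_000, seed=12345):
    """Bootstrap the difference alpha_A - alpha_B by resampling units (rows)."""
    rng = np.random.default_rng(seed)
    n_units = group_a.shape[0]
    observed_diff = nominal_alpha(group_a) - nominal_alpha(group_b)

    diffs = np.empty(n_boot)
    for b in range(n_boot):
        # Resample whole rows so each unit keeps both groups' ratings together.
        idx = rng.integers(0, n_units, size=n_units)
        diffs[b] = nominal_alpha(group_a[idx, :]) - nominal_alpha(group_b[idx, :])

    # Drop degenerate resamples (e.g. no variation at all), where alpha is undefined.
    diffs = diffs[np.isfinite(diffs)]

    # Two-sided p-value: twice the smaller tail proportion around zero.
    p_value = min(1.0, 2 * min(np.mean(diffs <= 0), np.mean(diffs >= 0)))
    ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
    return observed_diff, (ci_low, ci_high), p_value
```

Resampling units rather than raters matches the "keep the rows intact" step above and preserves the pairing between the two groups; the p-value is just the percentile bootstrap interval for the difference, inverted around zero.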