Does anyone have suggestions for adjusting scores from a psychometric assessment that appear to suffer from extreme response style (ERS)? I'm looking for an applied approach, ideally a formula I can use in Excel to adjust the data.
The assessment has 48 Likert-scale items using 7-point agreement scales. I have performed CFA with a validation set of 287 respondents, and I'm using deviation IQ to transform the scores. However, I noticed that some respondents tend to answer all 6s and 7s, all 3s, or all 1s and 2s. I have read a few articles on ERS and ARS (acquiescent response style), but I'm not sure how to actually apply the theory to real data and correct for these forms of bias. Since I already have the data, this would be a retrospective correction. Frankly, I'm not even sure it's needed; I don't know what the standard practice is in psychometrics.
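For the mechanics (setting aside whether a correction is warranted for your data): one simple retrospective adjustment discussed in the response-style literature is within-person standardization (ipsatization), which removes each respondent's own mean (acquiescence) and their own spread of scale usage (extremity). A minimal sketch in Python, with simulated data standing in for your 287 × 48 matrix — the data here are hypothetical, and this is only one of several possible corrections, not a standard prescription:

```python
import numpy as np

# Hypothetical stand-in for the real data:
# rows = 287 respondents, columns = 48 items scored 1-7
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(287, 48)).astype(float)

# Within-person standardization (ipsatization): subtract each
# respondent's own mean (removes acquiescence-type shift) and divide
# by their own SD (removes extremity-type stretching). The Excel
# equivalent per cell is:  = (cell - AVERAGE(row)) / STDEV(row)
row_mean = responses.mean(axis=1, keepdims=True)
row_sd = responses.std(axis=1, ddof=1, keepdims=True)
row_sd[row_sd == 0] = np.nan  # straight-liners have no within-person variance
adjusted = (responses - row_mean) / row_sd
```

Note the trade-off: ipsatized scores discard between-person differences in level, so they change the interpretation of the scale — each person's scores now describe their profile relative to their own average, which may or may not suit a deviation-IQ transformation.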
I have never actually addressed the issue; instead, I report the sample description and only generalize to those types of individuals. What approaches have you come across? I would imagine some type of up-weighting and possibly subgroup bootstrapping.
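Before up-weighting or bootstrapping by subgroup, it can help to first quantify how strongly each respondent exhibits a style, so the subgroups are defined by something measurable. A common descriptive device is a per-person index such as the proportion of endpoint responses (for ERS) or of agreement responses (for ARS). A sketch with simulated data — the 2-SD cutoff is purely illustrative, not an established threshold:

```python
import numpy as np

# Hypothetical stand-in data: 287 respondents x 48 items on a 1-7 scale
rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=(287, 48))

# ERS index: share of extreme endpoint responses (1 or 7) per person.
# ARS index: share of agreement-side responses (5, 6, or 7) per person.
ers = np.isin(responses, [1, 7]).mean(axis=1)
ars = np.isin(responses, [5, 6, 7]).mean(axis=1)

# Flag respondents whose ERS index is unusually high relative to the
# sample (illustrative cutoff: 2 SDs above the mean) for a sensitivity
# analysis or subgroup comparison.
cutoff = ers.mean() + 2 * ers.std(ddof=1)
flagged = ers > cutoff
```

Running the substantive analysis with and without the flagged respondents (or with them down-weighted) gives a cheap sensitivity check on whether the style actually moves the results.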
I was taught back in grad school that large enough samples of Likert items eventually conform to the central limit theorem, and that low and high scores balance out. I had never heard of response styles until now, and wondered whether this is an actual issue that "real" psychometricians take into account when providing a score to an individual.