Converting interview response data to aggregate research data


Active Member
I was a research subject in a data gathering interview, and it involved a lot of questions of the form "On a scale of one to ten..."

I found my answers to those questions uncomfortably arbitrary, and suspected the results would have to go through some sort of normalization procedure, something along the lines of weighted scaling of each numerical response according to the frequency of that response. Being mathematically inclined, I was irked by the arbitrariness of my own answers, and concluded the process is less accurate than it purports to be.
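To make the idea concrete, here is a minimal sketch of one reading of that "weighted scaling by frequency" notion: map each 1-10 response to its empirical percentile rank (midrank for ties), so that crowded middle-of-scale answers spread out and rare extremes keep their weight. The function name and data are my own invention, not anything from the interview.

```python
from collections import Counter

def percentile_normalize(responses):
    """Map each raw 1-10 response to its empirical percentile rank in [0, 1].

    Uses the midrank convention for ties: a value's rank is the count of
    responses strictly below it, plus half of its own occurrences, over n.
    A sketch of one possible frequency-weighted rescaling, nothing more.
    """
    counts = Counter(responses)
    n = len(responses)
    below = 0
    rank = {}
    for value in sorted(counts):
        rank[value] = (below + counts[value] / 2) / n
        below += counts[value]
    return [rank[r] for r in responses]

scores = [7, 7, 8, 5, 7, 9, 3, 7]
print(percentile_normalize(scores))
```

Note that with this transform the most common answer always lands near the middle of [0, 1], which is exactly the "frequency decides the scale" behavior described above.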

But what then could be done? So I endeavored to imagine a scenario where I would be quite comfortable with my answers, and here's what I came up with.

Allow only three responses: Yes, No, and Indeterminate. Then, when the answers are compiled into a set, the 1-10 scale (or 100, or 1000...) would emerge from the proportions of the pooled answers.
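A tiny sketch of that aggregation step, with hypothetical poll data of my own making: each respondent gives a ternary answer, and the finer-grained scale only appears once the pool is collapsed into proportions.

```python
from collections import Counter

def aggregate_ternary(answers):
    """Collapse Yes/No/Indeterminate answers into proportions.

    The fine-grained scale emerges from the pooled answers rather than
    from any single respondent's gut feeling about '7 vs 8'.
    """
    counts = Counter(answers)
    n = len(answers)
    return {key: counts[key] / n for key in ("yes", "no", "indeterminate")}

# Hypothetical poll of 100 respondents.
poll = ["yes"] * 62 + ["no"] * 30 + ["indeterminate"] * 8
print(aggregate_ternary(poll))  # {'yes': 0.62, 'no': 0.3, 'indeterminate': 0.08}
```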

It then occurred to me that the science of survey research may have already developed better solutions than were used in the field that day. So I've come here to ask: how do you process response data?


Less is more. Stay pure. Stay poor.
Depends on the purpose of the instrument and the questions. I always say everybody thinks they can do a survey, but there are people out there with actual doctorates in survey science.


Active Member
basically what i know about survey instruments like the SF-36 and friends can be summarized as:
1) You add up the responses to the questions to get the score for the instrument or 'subscale'.
2) Smart people build and validate instruments and then sell them to you. they use 'cronbach's alpha' to do this, sometimes, i guess.

And that's where babies come from.