Data-fusion / Meta-analysis

I have three separate tests I perform over the course of an interview. At the end of the interview, each test analyzes its respective data and produces an assessment of whether the interviewee was excited or annoyed by the interview, along with a confidence rating.

For example: Test/Method 1 says it is 83% sure the interviewee was annoyed during the interview.

The three tests are not all equal in terms of their accuracies: Test 1 is about 60% accurate, Test 2 is about 80% accurate, and Test 3 is about 54% accurate.

What I'm looking for is a way to combine the results of these three tests in some way to produce a result that is more accurate than looking at the three results in isolation. Any ideas?
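One simple way to combine the three verdicts is an accuracy-weighted vote: each test's confidence is scaled by how much better than chance that test is, so a near-coin-flip test contributes almost nothing. Below is a minimal sketch; the verdicts and the `weighted_vote` helper are illustrative, and the accuracies are the rough figures from the question.

```python
# Hypothetical per-test output: (label, confidence), plus known accuracy.
results = [("annoyed", 0.83), ("excited", 0.70), ("annoyed", 0.60)]
accuracies = [0.60, 0.80, 0.54]

def weighted_vote(results, accuracies):
    """Score each label by summing confidence * (accuracy - 0.5),
    so a 50%-accurate test contributes nothing to either side."""
    scores = {"excited": 0.0, "annoyed": 0.0}
    for (label, conf), acc in zip(results, accuracies):
        scores[label] += conf * (acc - 0.5)
    return max(scores, key=scores.get), scores

label, scores = weighted_vote(results, accuracies)
```

With these example numbers, Test 2's verdict dominates even though the other two tests disagree with it, because it is the only test that is much better than chance.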


Meta-analysis does not seem like the right direction.

Would you be interested, or is it possible, to see whether individual components of each test perform better or worse, then use those components to build a new instrument? Also, how did you determine the accuracies without asking the interviewees outright? That will be important when fusing the tests together. Perhaps something along the lines of a receiver operating characteristic (ROC) curve may be of interest.

Maybe someone else could chime in, but if you knew their true interest level, you could work backwards and construct a Bayesian (rule-based) pathway to better understand the tests.
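To make the Bayesian idea concrete: if you treat each test's stated accuracy as its probability of reporting the correct label, and you assume the tests err independently (a strong assumption that should be checked), each verdict contributes a log-likelihood ratio and the fused posterior follows from summing them. A minimal sketch, with the accuracies taken from the question and everything else illustrative:

```python
import math

# Accuracies from the question (assumed symmetric: the same accuracy
# applies whether the true state is "excited" or "annoyed").
ACCURACIES = [0.60, 0.80, 0.54]

def fuse(votes, accuracies=ACCURACIES, prior_annoyed=0.5):
    """Naive-Bayes fusion of test verdicts.

    votes: one entry per test, +1 for "annoyed", -1 for "excited".
    Assumes conditionally independent errors across tests.
    Returns the posterior probability the interviewee was annoyed.
    """
    log_odds = math.log(prior_annoyed / (1 - prior_annoyed))
    for vote, acc in zip(votes, accuracies):
        llr = math.log(acc / (1 - acc))  # weight of evidence per test
        log_odds += vote * llr
    return 1 / (1 + math.exp(-log_odds))

p_all = fuse([+1, +1, +1])    # all three tests say "annoyed"
p_split = fuse([-1, +1, -1])  # only Test 2 says "annoyed"
```

Note that in the split case the fused verdict still leans "annoyed": the 80%-accurate Test 2 carries more evidence than the 60% and 54% tests combined, which is exactly the kind of gain over reading the three results in isolation the question is after.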