Inter-rater agreement: uses of kappa, Pearson's, and intraclass coefficients

Hi everyone:

I really appreciate all of the great posts on here!

I'm trying to wrap my head around a couple of things. First of all, a bit of background:

I am trying to "validate" a newly developed questionnaire against a "gold standard": medical records. The variables are all either dichotomous or continuous. The questionnaire was filled out by the participants of a study and the medical records by their physicians, so there are two sources of information being compared for concordance. For the validation piece I plan to use sensitivity, specificity, and positive predictive value (PPV), roughly as in the sketch below.
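Just to make that piece concrete, here is roughly what I have in mind for one dichotomous item, treating the medical record as the gold standard (the function name and the data are made up purely for illustration):

```python
# Rough sketch of the validation piece for one dichotomous item, with the
# medical record as the gold standard. Function name and data are invented.

def validity_measures(questionnaire, medical_record):
    """Both inputs are lists of 0/1 codes, one entry per participant."""
    pairs = list(zip(questionnaire, medical_record))
    tp = sum(1 for q, m in pairs if q == 1 and m == 1)  # both say "yes"
    tn = sum(1 for q, m in pairs if q == 0 and m == 0)  # both say "no"
    fp = sum(1 for q, m in pairs if q == 1 and m == 0)  # questionnaire yes, record no
    fn = sum(1 for q, m in pairs if q == 0 and m == 1)  # questionnaire no, record yes
    sensitivity = tp / (tp + fn)  # share of record-positives the questionnaire picks up
    specificity = tn / (tn + fp)  # share of record-negatives the questionnaire calls negative
    ppv = tp / (tp + fp)          # share of questionnaire-positives that are truly positive
    return sensitivity, specificity, ppv

# Invented example: 8 participants, 0/1 codes for one condition
q = [1, 0, 1, 1, 0, 0, 1, 0]    # questionnaire
m = [1, 0, 0, 1, 0, 1, 1, 0]    # medical record (gold standard)
print(validity_measures(q, m))  # -> (0.75, 0.75, 0.75)
```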

I was planning to use the kappa statistic for the dichotomous variables, but I have been reading that it has some drawbacks (its dependence on the marginal prevalences?). For the continuous variables I was planning to use Pearson's correlation coefficient, but am now thinking about using the ICC instead, because I want the agreement between the two raters on a variable rather than just the linear association between the two measurements.
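Here is a rough sketch of both issues as I currently understand them. Again the data and names are invented, and the ICC call mentioned in the final comment assumes the pingouin package:

```python
# Rough sketch of the two agreement questions, with invented data.

import numpy as np

# Cohen's kappa for one dichotomous item, written out so the role of the
# marginal prevalences is visible.
def cohens_kappa(x, y):
    """x, y are arrays of 0/1 codes from the two sources."""
    x, y = np.asarray(x), np.asarray(y)
    po = np.mean(x == y)  # observed proportion of agreement
    # chance agreement expected from each source's marginal prevalence
    pe = np.mean(x) * np.mean(y) + (1 - np.mean(x)) * (1 - np.mean(y))
    return (po - pe) / (1 - pe)

# With a rare condition, raw agreement can be 96% while kappa is near zero --
# this is, I think, the "marginal"/prevalence problem I was reading about.
x = np.array([1] + [0] * 49)     # questionnaire: 1 positive out of 50
y = np.array([0, 1] + [0] * 48)  # record: 1 positive, but a different person
print("raw agreement:", np.mean(x == y))  # 0.96
print("kappa:", cohens_kappa(x, y))       # about -0.02

# Pearson's r vs agreement for a continuous item.
record = np.array([120.0, 135.0, 110.0, 150.0, 142.0, 128.0])
questionnaire = record + 15  # perfectly correlated but 15 units too high
r = np.corrcoef(record, questionnaire)[0, 1]
print("Pearson r:", r)       # 1.0 despite the systematic bias
# An absolute-agreement ICC would penalise that offset; I believe something like
# pingouin.intraclass_corr(data=long_df, targets='participant',
#                          raters='source', ratings='value')
# gives it, with the data reshaped to long format (one row per participant x source).
```

The first half shows 96% raw agreement with a kappa near zero because the condition is rare in both sources; the second half shows Pearson's r staying at 1.0 despite a constant offset, which is why I suspect an agreement-type ICC is the better fit. Please correct me if I've misunderstood either point.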

So, I guess my question is two-fold: is my plan sound? And what are the pros and cons of using these measures in this particular scenario?

Thank you so much!