kappa

  1. N

    Measuring equivalence/reliability (ICC and related measures)

    For a reliability study I'm administering patient-reported outcomes (questionnaires) with 2 modes of administration (on paper and via an electronic device). I'd like to measure equivalence/reliability of the 2 modes of administration. In other words: does the electronic version of the...
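
    A minimal sketch of how such mode equivalence is often quantified in R with the irr package, using hypothetical scores (one column per administration mode); a two-way, absolute-agreement, single-measure ICC is sensitive to systematic shifts between modes, not just rank order:

        library(irr)

        # Hypothetical scores: one row per patient, one column per mode
        scores <- data.frame(
          paper      = c(42, 35, 51, 48, 39, 44),
          electronic = c(40, 36, 50, 49, 38, 45)
        )

        # Two-way model, absolute agreement, single measures
        icc(scores, model = "twoway", type = "agreement", unit = "single")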
  2. A

    High correlation but different median scores. How can this be possible?

    I am writing a research paper. We asked the same patients to fill out a form at two different times. Data is non-normal. I used Spearman's correlation, which gave me a moderate correlation between the two scores. I also used weighted kappa and this too gave me a moderate agreement between both...
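
    These two findings can coexist: Spearman's correlation and weighted kappa reward consistent ordering, while a systematic shift between administrations moves the median. A toy illustration in base R with hypothetical scores, where the second administration is shifted upward:

        # Time 2 preserves the ordering of time 1 but adds a shift of ~5 points
        t1 <- c(10, 14, 18, 22, 30, 35, 41, 47)
        t2 <- t1 + c(4, 6, 5, 4, 6, 5, 4, 6)

        cor(t1, t2, method = "spearman")  # 1: identical rank order
        median(t1); median(t2)            # medians differ by about 5

        # A paired test picks up the systematic difference
        wilcox.test(t1, t2, paired = TRUE)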
  3. D

    Kappa for binary repeated measures

    Hi! I have two raters (patient and physician) who rate a condition at several visits. I would like to account for inter-individual correlation when I estimate kappa, and I want to assess whether there is a trend in kappa. For continuous data, the cccrm package (R) could work. I'm unsure...
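
    One simple starting point, which does not yet account for the within-patient correlation the question asks about, is to estimate Cohen's kappa separately at each visit and inspect the sequence for a trend; a sketch with the irr package and hypothetical long-format data:

        library(irr)

        # Hypothetical long format: one row per patient-visit,
        # with the binary ratings of both raters
        d <- data.frame(
          visit   = rep(1:3, each = 4),
          patient = rep(1:4, times = 3),
          rater_p = c(1,0,1,1, 1,1,0,1, 0,1,1,0),  # patient's rating
          rater_d = c(1,0,0,1, 1,1,0,0, 0,1,1,1)   # physician's rating
        )

        # Cohen's kappa per visit; these estimates are treated as independent
        sapply(split(d, d$visit),
               function(v) kappa2(v[, c("rater_p", "rater_d")])$value)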
  4. K

    Cohen's Kappa on Categorical Data in R - Gestural Repertoires

    Hi There, First time user here. I am a Master's student of Primatology; I wasn't sure Psychology Stats was the way to go, but it seems to fit better than Biostatistics for my question. My thesis is on the gestural communication and social cognition of captive northern white-cheeked gibbons...
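
    For two observers assigning gestures to unordered categories, irr::kappa2 on a two-column data frame is the usual call; a minimal sketch with hypothetical gesture labels:

        library(irr)

        # Hypothetical codings of the same 8 gesture events by two observers
        gestures <- data.frame(
          coder1 = c("reach", "slap", "touch", "reach", "wave", "slap", "touch", "wave"),
          coder2 = c("reach", "slap", "touch", "wave",  "wave", "slap", "reach", "wave")
        )

        # Unweighted Cohen's kappa is appropriate for nominal categories
        kappa2(gestures, weight = "unweighted")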
  5. N

    Interrater reliability test on 3 raters (2 experts and 1 non-expert) on ordinal data

    I need to find the interrater reliability between 3 raters. A nurse (non-expert) performed 60 ultrasound scans and categorised patients into 3 ordered categories (hypovolemic, euvolemic, hypervolemic) according to the scans. The recorded scans were subsequently...
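
    With three raters and ordered categories, two common options are Fleiss' kappa (which ignores the ordering) and Kendall's W (which uses it); a sketch with the irr package, coding the hypothetical categories 1 = hypovolemic, 2 = euvolemic, 3 = hypervolemic:

        library(irr)

        # Hypothetical ratings: one row per scan, one column per rater
        ratings <- cbind(
          nurse   = c(1, 2, 2, 3, 1, 2, 3, 3),
          expert1 = c(1, 2, 3, 3, 1, 2, 3, 2),
          expert2 = c(1, 2, 2, 3, 2, 2, 3, 3)
        )

        kappam.fleiss(ratings)            # treats the categories as nominal
        kendall(ratings, correct = TRUE)  # respects the ordinal ranking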
  6. D

    Kappa SPSS

    Hello All, Thank you in advance if you are able to provide some information on my question. I am attempting to run Cohen's kappa for inter-rater agreement in SPSS. I have 2 raters; however, there are multiple variables (20+) that they are each rating. My question is how do I go about setting...
  7. N

    How to get Kappa in surveys (complex samples) with SPSS?

    Hi, I am working with a stratified sample from a survey (a complex sample). I just tried to measure the agreement between two variables with the kappa test but it seems there is no way to take the nature of the sample into account in SPSS. We know the Kappa value will be the same, but...
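
    In R, the survey package offers svykappa() for exactly this: Cohen's kappa with design-based standard errors. A sketch under an assumed stratified design with hypothetical data:

        library(survey)

        # Hypothetical stratified sample with two categorical variables
        d <- data.frame(
          var1    = factor(c("yes","no","yes","yes","no","no","yes","no")),
          var2    = factor(c("yes","no","no","yes","no","yes","yes","no")),
          stratum = rep(c("A", "B"), each = 4),
          wt      = c(2, 2, 2, 2, 5, 5, 5, 5)
        )

        des <- svydesign(ids = ~1, strata = ~stratum, weights = ~wt, data = d)

        # Kappa plus a standard error that respects the sampling design
        svykappa(~var1 + var2, design = des)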
  8. M

    Multi-rater agreement statistic

    Hi all, this is my first post. I have a question on multi-rater agreement. Raters are given a description of the condition of each of the same 10 patients and are then asked at what time of the day (overnight, morning, afternoon, evening, none) their condition would be of most...
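
    For several raters placing each patient into one of a set of unordered categories, Fleiss' kappa or Krippendorff's alpha are the usual candidates; a sketch with irr::kripp.alpha, which expects raters in rows and subjects in columns (hypothetical codes 1-5 for the five time-of-day answers):

        library(irr)

        # Hypothetical codes: 1=overnight, 2=morning, 3=afternoon, 4=evening, 5=none
        # One row per rater, one column per patient (4 raters, 10 patients)
        x <- rbind(
          c(2, 3, 1, 4, 2, 5, 3, 2, 1, 4),
          c(2, 3, 1, 4, 2, 5, 3, 3, 1, 4),
          c(2, 4, 1, 4, 2, 5, 2, 2, 1, 5),
          c(2, 3, 1, 3, 2, 5, 3, 2, 1, 4)
        )

        kripp.alpha(x, method = "nominal")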
  9. P

    Inter-rater Reliability With 3 Coders

    Hello, I have three coders who have rated 22 dichotomous (present/absent) variables for 15 open-ended responses. Is it appropriate to use Fleiss' kappa here? And if so, is there a unique syntax for such a case? Thanks to whoever can help, PG BAH, Psychology
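
    Fleiss' kappa does fit three coders with present/absent codes, computed one variable at a time; a sketch with the irr package, assuming each coded variable is held as a 15-by-3 matrix of 0/1 values:

        library(irr)

        # Hypothetical: 15 responses, 3 coders, present (1) / absent (0)
        var1 <- cbind(
          coder1 = c(1,0,1,1,0,1,0,0,1,1,0,1,1,0,1),
          coder2 = c(1,0,1,0,0,1,0,0,1,1,0,1,1,0,1),
          coder3 = c(1,0,1,1,0,1,1,0,1,1,0,1,0,0,1)
        )

        kappam.fleiss(var1)

        # With all 22 variables stored in a list, the same call maps over them:
        # sapply(all_vars, function(v) kappam.fleiss(v)$value)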
  10. L

    Cohen's kappa or Fleiss kappa or both?

    Hi all, I am trying to compare 2 instruments, A and B, against the gold standard. The measurement outcome is dichotomous. 2 different raters, R1 and R2, each use instruments A and B to rate each subject. So the data look something like: ID, A_R1, A_R2, B_R1, B_R2...
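
    One way to organize this is to compute Cohen's kappa of each instrument-rater column against the gold standard separately, which keeps the four comparisons interpretable; a sketch with hypothetical dichotomous data:

        library(irr)

        # Hypothetical dichotomous ratings plus a gold-standard column
        d <- data.frame(
          gold = c(1,0,1,1,0,1,0,1),
          A_R1 = c(1,0,1,1,0,0,0,1),
          A_R2 = c(1,0,1,0,0,1,0,1),
          B_R1 = c(1,1,1,1,0,1,0,0),
          B_R2 = c(1,0,1,1,1,1,0,1)
        )

        # Agreement of each instrument-rater combination with the gold standard
        sapply(c("A_R1", "A_R2", "B_R1", "B_R2"),
               function(col) kappa2(d[, c("gold", col)])$value)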
  11. H

    Interrater Reliability of Dichotomous Variables

    Disclaimer: Not psychological research, I know, but the statistics involved are more often used in psychology than in medicine, so I'm asking for help here. Background: I had 4 radiologists read CT scans of ~50 patients to assess whether each of a series of 'signs' was present or not. After a washout...
  12. H

    Intra- and Interrater reliability of imaging findings

    Hi everyone, I'm having some difficulty deciding how best to analyze the data from a project. I had 4 radiologists read CT scans looking for specific signs (mostly giving True/False answers) on two occasions separated by a washout period. The last two variables included a 'Questionable'...
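
    A common split is intra-rater agreement (each radiologist's occasion 1 vs occasion 2, via Cohen's kappa) and inter-rater agreement (all four radiologists within one occasion, via Fleiss' kappa); a sketch with the irr package and hypothetical 0/1 data:

        library(irr)

        # Hypothetical: 6 scans, 4 radiologists, 2 occasions (1 = sign present)
        occ1 <- cbind(r1 = c(1,0,1,1,0,1), r2 = c(1,0,1,0,0,1),
                      r3 = c(1,0,0,1,0,1), r4 = c(1,1,1,1,0,1))
        occ2 <- cbind(r1 = c(1,0,1,1,0,0), r2 = c(1,0,1,0,0,1),
                      r3 = c(1,0,1,1,0,1), r4 = c(1,0,1,1,0,1))

        # Intra-rater: each radiologist against themselves across occasions
        sapply(1:4, function(i) kappa2(cbind(occ1[, i], occ2[, i]))$value)

        # Inter-rater: all four radiologists within one occasion
        kappam.fleiss(occ1)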
  13. B

    Cohen's Kappa Problem

    I am aiming to check the inter-rater reliability of a scale using Cohen's kappa. I have input the 5 scores as their own variables for Rater A and the same again for Rater B (1A, 2A, 3A, 4A, 5A and 1B, 2B, 3B, etc.). In each variable, the scores range from 0 to 40 and are not categories - just...
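
    With scores running from 0 to 40 rather than discrete categories, Cohen's kappa is a poor fit, since it would treat every distinct score as its own category; an intraclass correlation is the usual replacement. A sketch with the psych package, assuming the two raters' five scores are arranged as two columns:

        library(psych)

        # Hypothetical 0-40 scores: one row per item, one column per rater
        scores <- cbind(
          raterA = c(32, 18, 25, 40, 12),
          raterB = c(30, 20, 24, 38, 15)
        )

        # ICC() reports the six Shrout-Fleiss variants; ICC2 (two-way random,
        # single rater, absolute agreement) is a common choice for reliability
        ICC(scores)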
  14. J

    Q re Kappa statistic

    I am trying to calculate inter-rater reliability scores for 10 survey questions, most of which are binary (yes/no). The agreement level between the two raters is 70-90% on nearly every question, but the kappa score is often very poor (0.2 to 0.4). Can this be right? Secondly, can you...
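
    Yes, this can be right: with a skewed yes/no split, chance agreement p_e is already high, so kappa = (p_o - p_e) / (1 - p_e) stays small even when raw agreement p_o is large. A worked example in base R with hypothetical counts:

        # Hypothetical 2x2 table (100 cases): rows = rater 1, columns = rater 2,
        # "yes" first; raw agreement is 91%, but "yes" dominates both margins
        tab <- matrix(c(90, 4,
                         5, 1), nrow = 2, byrow = TRUE)

        n   <- sum(tab)
        p_o <- sum(diag(tab)) / n                      # observed agreement = 0.91
        p_e <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance agreement = 0.896
        (p_o - p_e) / (1 - p_e)                        # kappa = 0.13, despite 91%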
  15. J

    Agreement statistic for 2x2 matrix with one structural zero

    I have two observers who watched a series of videos looking for gestures. They recorded the onset time of any gesture they observed. An agreement was scored when the two observers recorded a gesture onset within 1 second of each other. A disagreement was scored when one observer...
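
    Because "both observers saw nothing" is unobservable here, the no/no cell is a structural zero and chance-corrected kappa is not well defined; the proportion of positive agreement, 2a / (2a + b + c), is one commonly used summary instead. A quick check in base R with hypothetical counts:

        n_both <- 37  # gestures scored by both observers (onsets within 1 s)
        n_one  <- 6   # gestures scored by observer 1 only
        n_two  <- 4   # gestures scored by observer 2 only

        # Positive (specific) agreement; no chance correction is attempted
        # because the "neither saw a gesture" cell cannot be counted
        2 * n_both / (2 * n_both + n_one + n_two)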
  16. R

    Inter-rater agreement: uses of kappa, Pearson's, and intraclass coefficients

    Hi everyone: I really appreciate all of the great posts on here! I'm trying to wrap my head around a couple of things. First of all, a bit of background: I am trying to "validate" a questionnaire that was developed against a "gold standard": medical records. The variables are all...