- Thread starter Omerikooo

Actually, you are right. My main problem is that my intra-rater reliability is actually perfect, but due to the intrinsic properties of the kappa statistic the coefficient can't be calculated.

Since my variables are categorical, adding a small value like 0.1 would turn them into interval data (e.g. 1.1), and in that case kappa can't be calculated either.

My solution could be to present percent agreement (100% identical, 90% identical, etc.), but this is not a good way to report agreement.

Another solution would be a test that still works even when a rater gives constant ratings. Unfortunately, I failed to find any.
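To make the degeneracy concrete: Cohen's kappa is (p_o − p_e)/(1 − p_e), and when every rating falls in the same category, both the observed agreement p_o and the chance agreement p_e equal 1, so the denominator is zero. A minimal hand-rolled sketch (the function name is mine, the data made up):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length lists of categorical ratings.

    Returns None when chance agreement p_e is exactly 1 (constant
    ratings), because kappa's denominator 1 - p_e is then zero.
    """
    n = len(a)
    categories = sorted(set(a) | set(b))
    # Observed proportion of agreement.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each rater's marginal proportions.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    if p_e == 1.0:
        return None
    return (p_o - p_e) / (1 - p_e)

print(cohen_kappa([0, 1, 1, 0], [0, 1, 1, 1]))  # 0.5 -- well defined
print(cohen_kappa([1, 1, 1, 1], [1, 1, 1, 1]))  # None -- p_o = p_e = 1
```

The second call is exactly the situation described above: perfect agreement on a constant column leaves kappa undefined rather than equal to 1.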

I don't understand how intra-rater reliability, or anything in real data, can ever be perfect. You mean the two ratings agreed every time? That does not seem likely.

But if it is true, do you really need a test statistic for inter-rater reliability? You have two clones.

Haha, you are right. Intra-rater reliability refers to measurements by one rater at different time points. My results mean that the rater gave the same values for the same variable at both time points. This is possible since the measurement is easy and the variable is binary.

I agree that there is no need for any other test here, because the agreement is perfect anyway.

I somehow solved the problem with another approach that is not related to statistics.

Thanks!

If kappa can't work and you can't find a fix, just report a correlation coefficient with confidence intervals - that would convey the matching of values to your audience. Also, provide a dataframe of example data so we know what you are working with. I would imagine some kind of weighted accuracy value could be derived from a contingency table if you are working with discrete values.
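A sketch of that contingency-table idea (the helper names are mine, the example data made up): cross-tabulate the two rating columns, then report percent agreement with a Wilson score confidence interval, which stays informative even when observed agreement is exactly 100%:

```python
from math import sqrt

def contingency_table(a, b):
    """Cross-tabulate two rating vectors: counts keyed by (rating_a, rating_b)."""
    table = {}
    for x, y in zip(a, b):
        table[(x, y)] = table.get((x, y), 0) + 1
    return table

def agreement_with_ci(a, b, z=1.96):
    """Observed proportion of agreement with a Wilson score 95% CI.

    Unlike the simple normal-approximation interval, the Wilson interval
    does not collapse to zero width when agreement is exactly 1.
    """
    n = len(a)
    p = sum(x == y for x, y in zip(a, b)) / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)

time1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
time2 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(contingency_table(time1, time2))      # {(1, 1): 10}
p, lo, hi = agreement_with_ci(time1, time2)
print(round(p, 3), round(lo, 3), round(hi, 3))  # 1.0 0.722 1.0
```

With ten identical pairs the agreement is 1.0, but the Wilson lower bound (~0.72) honestly reflects the small sample size, which plain "100% match" would hide.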

Only one rater did two ratings at different time points.

A correlation coefficient wouldn't work in my case, since I have a binary variable with only 0s and 1s.

If this were some kind of continuous variable I would use the ICC (intraclass correlation coefficient), and there would be no problem.

In my example, one of the columns is all ones, and this is what causes the problem.

An easy approach would be to present only the percent match, but I don't like that solution.
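The all-ones column is exactly what breaks Pearson's r too: its denominator is the product of the two columns' standard deviations, which is zero when either column is constant. A quick illustration with made-up data (hand-rolled so the degenerate case returns None instead of raising):

```python
def pearson(x, y):
    """Pearson correlation; returns None when either column is constant,
    because zero variance makes the denominator zero and r undefined."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return None
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

time1 = [1, 0, 1, 1, 0, 1]   # binary ratings that vary: r is defined
time2 = [1, 1, 1, 1, 1, 1]   # all ones, as in the column described above
print(pearson(time1, time1))  # ~1.0 -- binary data alone is not the issue
print(pearson(time1, time2))  # None -- undefined against a constant column
```

Note that binary 0/1 data per se correlates fine (that is the phi coefficient); it is the zero-variance column that makes r, like kappa, incalculable here.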

I always imagine there is one for every occasion - like hats.

https://core.ac.uk/download/pdf/82193386.pdf