Inter-rater reliability with multiple rotating adjudicators

Hello all,

I am analyzing data from a clinical study in which each case was adjudicated by two to three adjudicators drawn from a panel of six. Each case was first rated by two primary adjudicators; when they disagreed, a third adjudicator was brought in to break the tie.

Below is a sample of the data; I am using R for the analysis.

Code:
adjdata <- data.frame(
  J1 = c('Yes', NA, NA, 'Yes', NA, 'Yes', NA),
  J2 = c('Yes', 'Yes', NA, NA, 'Yes', 'Yes', 'Yes'),
  J3 = c(NA, 'No', 'No', NA, 'No', NA, 'Yes'),
  J4 = c(NA, 'No', NA, 'Yes', NA, NA, NA),
  J5 = c(NA, NA, 'No', NA, 'Yes', NA, NA),
  J6 = c('No', NA, 'Yes', NA, NA, 'No', NA)
)


This prints as:

    J1   J2   J3   J4   J5   J6
1  Yes  Yes <NA> <NA> <NA>   No
2 <NA>  Yes   No   No <NA> <NA>
3 <NA> <NA>   No <NA>   No  Yes
4  Yes <NA> <NA>  Yes <NA> <NA>
5 <NA>  Yes   No <NA>  Yes <NA>
6  Yes  Yes <NA> <NA> <NA>   No
7 <NA>  Yes  Yes <NA> <NA> <NA>
What is the appropriate statistical method for calculating inter-rater reliability with this study design?

I tried running Fleiss' kappa, but I got the following error:

Code:
irr::kappam.fleiss(adjdata)
Error in ratings[i, ] : subscript out of bounds
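
From a look at the irr source, kappam.fleiss() appears to call na.omit() on the ratings first, so every case with any missing rating is dropped. Since no case here was rated by all six adjudicators, all rows are removed and the subsequent indexing fails. As a sanity check, a made-up complete data set (hypothetical raters R1 and R2, no NAs) runs without the error:

Code:
# Hypothetical complete ratings: every case rated by every rater, no NAs
complete <- data.frame(
  R1 = c('Yes', 'No', 'Yes', 'No'),
  R2 = c('Yes', 'No', 'No', 'No')
)
irr::kappam.fleiss(complete)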

After some additional reading, Krippendorff's alpha appears to be an appropriate measure of agreement for this design, since it can handle missing ratings.
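
Below is a sketch of what I was planning to try, based on my reading of the irr documentation: kripp.alpha() takes a raters x cases matrix (so my cases-by-raters data frame needs transposing), and method = "nominal" should fit unordered Yes/No ratings.

Code:
library(irr)

# kripp.alpha() expects raters in rows and cases in columns,
# so transpose the cases-by-raters data frame
ratings <- t(as.matrix(adjdata))

# "nominal" because Yes/No are unordered categories;
# missing adjudications (NA) are tolerated
kripp.alpha(ratings, method = "nominal")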
I would appreciate any guidance or feedback.