Agreement between raters / inter-rater reliability: Dilemma

Hi,

I have run a test in which 2 raters independently rated 1000 texts using the following options:
- Happy
- Sad
- Angry
- Confused
- Could not tell

Now I am trying to determine their agreement (i.e., reliability). However, I am facing a dilemma, especially for texts where the "Could not tell" option was selected. In particular: should I include the "Could not tell" option in the reliability calculation using Krippendorff's Alpha, or should I remove it from my analysis, since selecting "Could not tell" arguably should not be treated as disagreement?
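
For context, here is a minimal sketch of how I could compute alpha both ways, assuming the third-party Python "krippendorff" package (pip install krippendorff); the toy ratings, the label-to-code mapping, and the helper name to_matrix are placeholders I made up, not my actual data:

```python
import numpy as np
import krippendorff  # third-party: pip install krippendorff

# Map labels to numeric codes; np.nan will mark a missing value.
CODES = {"Happy": 0, "Sad": 1, "Angry": 2, "Confused": 3, "Could not tell": 4}

# Toy ratings for 6 texts by 2 raters (stand-in for the real 2 x 1000 data).
rater1 = ["Happy", "Sad", "Angry", "Could not tell", "Confused", "Happy"]
rater2 = ["Happy", "Sad", "Confused", "Could not tell", "Confused", "Could not tell"]

def to_matrix(r1, r2, cnt_as_missing):
    """Build a raters x units matrix; optionally recode 'Could not tell' as missing."""
    def code(label):
        if cnt_as_missing and label == "Could not tell":
            return np.nan
        return float(CODES[label])
    return np.array([[code(x) for x in r1],
                     [code(x) for x in r2]])

# Option A: keep "Could not tell" as a fifth nominal category, so a
# "Happy" vs "Could not tell" pair counts as a disagreement.
alpha_with = krippendorff.alpha(
    reliability_data=to_matrix(rater1, rater2, cnt_as_missing=False),
    level_of_measurement="nominal")

# Option B: recode "Could not tell" as missing data; alpha's built-in
# missing-data handling then ignores those cells instead of penalising them.
alpha_without = krippendorff.alpha(
    reliability_data=to_matrix(rater1, rater2, cnt_as_missing=True),
    level_of_measurement="nominal")

print(f"alpha, 'Could not tell' as its own category: {alpha_with:.3f}")
print(f"alpha, 'Could not tell' treated as missing:  {alpha_without:.3f}")
```

If I go with option B, only the texts that still have two codable values contribute to alpha, since Krippendorff's Alpha handles missing data natively; option A treats "Could not tell" as a category in its own right, so a mismatch against it counts as disagreement.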

Thank you in advance.