LREC 2000 2nd International Conference on Language Resources & Evaluation
 


Title On the Usage of Kappa to Evaluate Agreement on Coding Tasks
Authors Di Eugenio, Barbara (Electrical Engineering and Computer Science, University of Illinois at Chicago, Chicago, IL 60607, USA, dieugeni@eecs.uic.edu)
Keywords Annotated Corpora, Intercoder Reliability
Session Session SP2 - Spoken Language Resources Issues from Construction to Validation
Full Paper 206.ps, 206.pdf
Abstract In recent years, the Kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement in the discourse and dialogue processing community. Along with this standard, researchers have also adopted one specific scale for interpreting Kappa values, namely the one proposed in (Krippendorff, 1980). In this paper, I highlight some issues that should be taken into account when evaluating Kappa values. Finally, I speculate on whether Kappa could be used as a measure to evaluate a system's performance.
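
For readers unfamiliar with the coefficient, the sketch below illustrates one common formulation of Kappa for two coders, kappa = (P(A) - P(E)) / (1 - P(E)), where P(A) is the observed agreement and P(E) the agreement expected by chance. It is a minimal illustration only: the function name and the two label sequences are hypothetical and are not drawn from the paper, which discusses how such values should be interpreted rather than prescribing this particular computation.

# Minimal sketch (not from the paper): Kappa for two coders over one coding task,
# using kappa = (P(A) - P(E)) / (1 - P(E)).
from collections import Counter

def two_coder_kappa(coder1, coder2):
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    # Observed agreement: fraction of items both coders labeled identically.
    p_a = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: sum over categories of the product of each coder's
    # marginal probability of using that category.
    c1, c2 = Counter(coder1), Counter(coder2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_a - p_e) / (1 - p_e)

# Hypothetical example: two coders labeling ten utterances as statement or question.
labels_a = ["stmt", "stmt", "q", "stmt", "q", "stmt", "stmt", "q", "stmt", "stmt"]
labels_b = ["stmt", "q",    "q", "stmt", "q", "stmt", "stmt", "q", "q",    "stmt"]
print(round(two_coder_kappa(labels_a, labels_b), 3))  # 0.565 for this toy data

Here P(A) = 0.8 and P(E) = 0.54, giving Kappa of about 0.57; how such a value should be judged against a scale like Krippendorff's is exactly the kind of question the paper takes up.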