
Kappa coefficient: a popular measure of rater agreement

Erratum in

  • Errata. [No authors listed] Shanghai Arch Psychiatry. 2015 Apr 25;27(2):89. PMID: 26120258.

Abstract

In mental health and psychosocial studies it is often necessary to report on the between-rater agreement of measures used in the study. This paper discusses the concept of agreement, highlighting its fundamental difference from correlation. Several examples demonstrate how to compute the kappa coefficient – a popular statistic for measuring agreement – both by hand and by using statistical software packages such as SAS and SPSS. Real study data are used to illustrate how to use and interpret this coefficient in clinical research and practice. The article concludes with a discussion of the limitations of the coefficient.

Keywords: correlation; interrater agreement; weighted kappa.
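
As a quick illustration of the computation the abstract refers to, the sketch below applies Cohen's (1960) formula, kappa = (p_o - p_e) / (1 - p_e), to a small hypothetical two-rater table. It is plain Python rather than the SAS or SPSS code discussed in the article, and the counts are invented purely to show the arithmetic.

    # Minimal sketch: Cohen's kappa for two raters and two nominal categories.
    # The counts below are hypothetical; only the arithmetic follows Cohen (1960).
    table = [
        [20, 5],   # rater A "case":     rater B "case", rater B "non-case"
        [10, 15],  # rater A "non-case": rater B "case", rater B "non-case"
    ]

    n = sum(sum(row) for row in table)                    # total number of subjects
    p_observed = sum(table[i][i] for i in range(2)) / n   # agreement on the diagonal

    # Chance-expected agreement: product of the two raters' marginal proportions.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(2)) for j in range(2)]
    p_expected = sum((row_totals[k] / n) * (col_totals[k] / n) for k in range(2))

    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(f"p_o = {p_observed:.2f}, p_e = {p_expected:.2f}, kappa = {kappa:.2f}")
    # With these counts: p_o = 0.70, p_e = 0.50, kappa = 0.40

Worked by hand, the same numbers give (0.70 - 0.50) / (1 - 0.50) = 0.40; the chance-correction term p_e is what separates kappa from raw percent agreement.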
