A primer on reliability testing of a rating scale


Abstract

In this article, the second in a series on rating scale translation, adaptation, and psychometric testing, we focus on reliability testing of a rating scale. Reliability refers to the consistency of results when the scale is administered repeatedly to the same individuals under the same conditions. We discuss three key types of reliability: internal consistency, test-retest reliability, and inter-rater reliability. The appropriate measure for reporting internal consistency is Cronbach's alpha (α); for test-retest reliability, it is the intraclass correlation coefficient (ICC) for continuous variables and the intraclass kappa for categorical variables. For inter-rater reliability, the preferred measure is Cohen's kappa (κ) for categorical variables with two raters, or the ICC for continuous variables; depending on how raters are selected (fixed or randomly sampled), different statistical models are used to compute the ICC. This article presents these concepts with simple, non-technical explanations. We also address practical considerations for conducting reliability tests, explain how to choose the right statistical index for each type of reliability, and clarify common misapplications. Finally, we offer guidance on interpreting and reporting reliability test results in a manuscript, along with instructions on conducting these analyses using IBM SPSS Statistics.
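To make two of the indices named above concrete, here is a minimal illustrative sketch in Python, not the SPSS procedure described in the article. It computes Cronbach's alpha from a subjects-by-items score matrix and Cohen's kappa for two raters' categorical ratings, using only the standard textbook formulas; the function names and the NumPy dependency are this sketch's own assumptions.

```python
import numpy as np


def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of subjects' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical ratings of the same subjects.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's marginals.
    """
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    p_o = np.mean(r1 == r2)  # observed proportion of agreement
    # chance agreement: product of marginal proportions, summed over categories
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in np.union1d(r1, r2))
    return (p_o - p_e) / (1 - p_e)
```

For example, perfectly parallel items yield `cronbach_alpha([[1, 1], [2, 2], [3, 3]]) == 1.0`, while two raters who disagree on one of four cases give a kappa well below their raw 75% agreement, illustrating the chance correction.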
