Interrater reliability between in-person and telemedicine evaluations in obstructive sleep apnea


Abstract

STUDY OBJECTIVES: We examined how telemedicine evaluation compares to face-to-face evaluation in identifying risk for sleep-disordered breathing.

METHODS: This was a randomized interrater reliability study of 90 participants referred to a university sleep center. Participants were evaluated by a clinician investigator seeing the patient in-person, then randomized to a second clinician investigator who performed a patient evaluation online via audio-video conferencing. The primary comparator was pretest probability for obstructive sleep apnea.

RESULTS: The primary outcome comparing pretest probability for obstructive sleep apnea showed a weighted kappa value of 0.414 (standard error 0.090, P = .002), suggesting moderate agreement between the 2 raters. Kappa values of our secondary outcomes varied widely, but the kappa values were lower for physical exam findings compared to historical elements.

CONCLUSIONS: Evaluation for pretest probability for obstructive sleep apnea via telemedicine has a moderate interrater correlation with in-person assessment. A low degree of interrater reliability for physical exam elements suggests telemedicine assessment for obstructive sleep apnea could be hampered by a suboptimal physical exam. Employing standardized scales for obstructive sleep apnea when performing telemedicine evaluations may help with risk-stratification and ultimately lead to more tailored clinical management.
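The weighted kappa reported above measures chance-corrected agreement between two raters on an ordinal scale, penalizing larger disagreements more heavily than adjacent ones. The sketch below is a minimal, self-contained illustration of a linear-weighted Cohen's kappa; the three-level pretest-probability categories and the example ratings are hypothetical, not the study's data, and the study's own statistical procedure may differ.

```python
def weighted_kappa(rater_a, rater_b, categories, weights="linear"):
    """Linear- or quadratic-weighted Cohen's kappa for two ordinal raters.

    `categories` lists the ordinal levels in order (e.g. low < intermediate
    < high); disagreement weights grow with the distance between levels.
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)

    # Observed joint distribution of the two raters' category assignments.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[index[a]][index[b]] += 1.0 / n

    # Marginal distributions for each rater (expected agreement by chance).
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weights: |i - j| for linear, (i - j)^2 for quadratic.
    power = 1 if weights == "linear" else 2
    w = [[abs(i - j) ** power for j in range(k)] for i in range(k)]

    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp


# Hypothetical pretest-probability ratings from two raters on 6 patients.
cats = ["low", "intermediate", "high"]
in_person = ["low", "high", "intermediate", "high", "low", "intermediate"]
telemed = ["low", "intermediate", "intermediate", "high", "intermediate", "low"]
print(round(weighted_kappa(in_person, telemed, cats), 3))  # prints 0.4
```

On the commonly cited Landis–Koch scale, values between 0.41 and 0.60 (such as the study's 0.414) are interpreted as moderate agreement.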
