Direct Observation Assessment of Ultrasound Competency Using a Mobile Standardized Direct Observation Tool Application With Comparison to Asynchronous Quality Assurance Evaluation


Abstract

OBJECTIVES: Competency assessment is a key component of point-of-care ultrasound (POCUS) training. The purpose of this study was to design a smartphone-based standardized direct observation tool (SDOT) and to compare a faculty-observed competency assessment at the bedside with a blinded reference standard assessment in the quality assurance (QA) review of ultrasound images. METHODS: In this prospective, observational study, an SDOT was created using SurveyMonkey containing specific scoring and evaluation items based on the Council of Emergency Medicine Residency-Academy of Emergency Ultrasound: Consensus Document for the Emergency Ultrasound Milestone Project. Ultrasound faculty used the mobile phone-based data collection tool as an SDOT at the bedside when students, residents, and fellows were performing one of eight core POCUS examinations. Recorded data included demographics, examination-specific items, and overall quality measures (on a scale of 1-5, with 3 and above defined as adequate for clinical decision making), as well as interpretation and clinical knowledge. The POCUS examination itself was recorded and uploaded to QPath, a HIPAA-compliant ultrasound archive. Each examination was later reviewed by another faculty member blinded to the result of the bedside evaluation. The agreement of examinations scored adequate (3 and above) in the two evaluation methods was the primary outcome. RESULTS: A total of 163 direct observation evaluations were collected from 23 EM residents (93 SDOTs [57%]), 14 students (51 SDOTs [31%]), and 4 fellows (19 SDOTs [12%]). The trainees were evaluated on completing cardiac (54 [33%]), focused assessment with sonography for trauma (34 [21%]), biliary (25 [15%]), aorta (18 [11%]), renal (12 [7%]), pelvis (8 [5%]), deep vein thrombosis (7 [4%]), and lung (5 [3%]) examinations.
Overall, the number of observed agreements between bedside and QA assessments was 81 (87.1% of the observations) for evaluating the quality of images (scores 1 and 2 vs. scores 3, 4, and 5). The strength of agreement was "fair" (κ = 0.251, 95% confidence interval [CI] = 0.02-0.48). Further analysis demonstrated fair agreement for images taken by residents and students and "perfect" agreement for images taken by fellows. Raters agreed on the accuracy of interpretation of the POCUS scan (e.g., true positive, false negative) in 79.1% of observations during QA and bedside evaluation, with "moderate" inter-rater agreement overall (κ = 0.48, 95% CI = 0.34-0.63). Faculty at the bedside and on QA assessment reached moderate agreement on interpretations by residents and students and "good" agreement on fellows' scans. CONCLUSION: Using a bedside SDOT through a mobile SurveyMonkey platform facilitates assessment of competency in emergency ultrasound learners and correlates well with traditional competency evaluation by asynchronous weekly image-review QA.
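For readers unfamiliar with the agreement statistic reported above, the primary outcome can be illustrated with a short sketch: each examination's 1-5 quality score is dichotomized (≥3 = adequate for clinical decision making), and agreement between the bedside SDOT rater and the blinded QA rater is summarized with Cohen's kappa. The function and the score lists below are illustrative assumptions, not the study's actual data or analysis code.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of binary (0/1) labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 quality scores for eight examinations
bedside_scores = [4, 3, 2, 5, 3, 1, 4, 3]
qa_scores      = [3, 4, 2, 5, 2, 1, 4, 3]

# Dichotomize as in the study: adequate if score >= 3
bedside = [int(s >= 3) for s in bedside_scores]
qa      = [int(s >= 3) for s in qa_scores]

print(round(cohens_kappa(bedside, qa), 3))  # -> 0.714
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why the abstract can report high raw agreement (87.1%) alongside only "fair" kappa (0.251) when adequate ratings dominate both raters' distributions.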
