An AI-Powered Strategy for Managing Patient Messaging Load and Reducing Burnout



Abstract

This study evaluated the impact of using a large language model (LLM) to generate draft responses to patient messages in the electronic health record (EHR) system on the workload and efficiency of clinicians and support staff. We partnered with Epic Systems to implement OpenAI's ChatGPT 4.0 for responding to patient messages. A pilot study was conducted from August 2023 to July 2024 across 13 ambulatory specialties, involving 323 participants, including clinicians and support staff. Data on draft utilization rates and message response times were collected and analyzed using statistical methods.

The overall mean utilization rate of generated drafts was 38%, with significant differences by role and specialty. Clinicians used drafts at a higher rate (43%) than scheduling staff (33%). Using a draft significantly reduced message response time across all users (13 seconds on average). Support staff saw a larger, statistically significant time saving (23 seconds), whereas the saving for clinicians was negligible (3 seconds). Utilization rates and time savings also varied across specialties.

Implementing LLMs to draft patient message replies can reduce response times and alleviate message burden. However, the effectiveness of artificial intelligence (AI)-generated draft responses varies by clinical role and specialty, indicating the need for tailored implementations. Further investigation into this variability, along with continued development and personalization of AI tools, is recommended to maximize their utility and ensure safe and effective use in diverse clinical contexts.
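To make the two reported metrics concrete, the sketch below shows one plausible way to compute a draft utilization rate and a mean response-time saving per role from a message log. This is purely illustrative and not the study's actual analysis code: the record layout, field names, and sample values are all invented for the example.

```python
# Illustrative sketch only (hypothetical data, not the study's pipeline):
# compute draft utilization rate and mean response-time saving by role.
from statistics import mean

# Each record: (role, draft_used, response_time_seconds) -- invented values.
messages = [
    ("clinician", True, 95),
    ("clinician", False, 98),
    ("clinician", True, 92),
    ("support", True, 70),
    ("support", False, 93),
    ("support", True, 71),
]

def utilization_rate(records, role):
    """Share of a role's messages for which the AI draft was used."""
    used_flags = [used for r, used, _ in records if r == role]
    return sum(used_flags) / len(used_flags)

def mean_time_saving(records, role):
    """Mean response time without a draft minus mean time with a draft."""
    with_draft = [t for r, used, t in records if r == role and used]
    without_draft = [t for r, used, t in records if r == role and not used]
    return mean(without_draft) - mean(with_draft)

print(f"clinician utilization: {utilization_rate(messages, 'clinician'):.0%}")
print(f"support time saving: {mean_time_saving(messages, 'support')} s")
```

A real analysis would additionally need significance testing (the study reports statistically significant differences), which could be layered on top of per-role summaries like these.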
