Benchmarking Large Language Models Using a Best Evidence Topic Report in a Patient With Early Non-Small Cell Lung Cancer


Abstract

OBJECTIVES: Large language models (LLMs) are generative AI systems that produce text output resembling human conversation. We aimed to assess the ability of LLMs to answer patients' questions and to benchmark their output against a best evidence topic (BET). METHODS: We asked LLMs whether robot-assisted thoracic surgery (RATS) or video-assisted thoracoscopic surgery (VATS) lobectomy had better perioperative outcomes for postoperative pain, length of hospital stay (LOS) and mortality. A BET addressing the same questions was constructed according to a structured protocol. An initial search yielded 324 papers, of which 12 represented the best evidence. RESULTS: LLM outputs were almost instantaneous, whereas the BET required many hours of database searching for relevant evidence. However, current iterations and models of LLMs did not provide relevant outputs, suffered from hallucinations, and could be restricted by copyright and paywall issues. The BET, by contrast, was tailored to the scenario under specialist human oversight and was therefore more reliable and nuanced. CONCLUSIONS: There were no major differences between RATS and VATS lobectomy for T1cN0M0 NSCLC apart from a shorter LOS following RATS. Current LLMs may not be entirely reliable for answering clinical questions. An LLM-BET protocol could serve as a standardized process for comparing LLM outputs across different clinical scenarios, each benchmarked with a BET, and for analysing the outputs of different models of current and future LLMs.
