Abstract
BACKGROUND: The purpose of this study was to evaluate the capability of a large language model (LLM) to perform each step of clinical practice guideline development, from framing a healthcare question to creating the evidence-to-decision framework.

METHODS: The LLM used for this study was OpenAI's Generative Pretrained Transformer (GPT)-4o. The evaluation was conducted concomitantly with the development of a clinical practice guideline on neuromuscular blockade in adults with acute respiratory distress syndrome for the Society of Critical Care Medicine. The Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) handbook provided the steps of the process that served as the outline for the evaluation of GPT-4o. Each request for information was posed to the LLM during, or soon after, the period in which the guideline panel conducted the corresponding step of the process. The results follow the major sections of the GRADE process: framing the healthcare question and selecting and rating the importance of outcomes, summarizing the evidence and its quality, and moving from evidence to recommendations.

RESULTS AND CONCLUSIONS: The LLM was most useful for the initial step of the guideline development process, which involved framing the healthcare question and selecting and rating outcomes. Its limitations became most apparent during the remaining steps of the development process.