Abstract
BACKGROUND: Although mechanical ventilation (MV) is a critical competency in critical care training, standardized methods for assessing MV-related knowledge are lacking. Traditional multiple-choice question (MCQ) development is resource-intensive, and prior studies have suggested that generative AI tools could streamline question creation. However, the quality of AI-generated MCQs remains unclear.

RESEARCH QUESTION: Are MCQs generated by ChatGPT noninferior to human expert (HE)-created questions in quality and relevance for MV education?

STUDY DESIGN AND METHODS: Three key MV topics were selected: Equation of Motion and Ohm's Law, Tau and Auto-PEEP, and Oxygenation. Fifteen learning objectives were used to generate 15 AI-written MCQs via a standardized prompt with ChatGPT-o1 (preview model; made available September 12, 2024). A group of 31 faculty experts, all of whom regularly teach MV, evaluated both the AI- and HE-generated MCQs. Each MCQ was assessed on its alignment with the learning objective, accuracy of the designated correct answer, clarity of the question stem, plausibility of the distractor options, and difficulty level. Faculty members were blinded to the provenance of each MCQ. The noninferiority margin was predefined as 15% of the total possible score (-3.45).

RESULTS: AI-generated MCQs were statistically noninferior to HE-written MCQs (one-sided 95% CI, -1.15 to ∞). In addition, respondents were unable to reliably differentiate AI-generated MCQs from HE-written MCQs (P = .32).

INTERPRETATION: Our results suggest that AI-generated MCQs produced with ChatGPT-o1 are comparable in quality to those written by HEs. Given the time- and resource-intensive nature of human MCQ development, AI-assisted question generation may serve as an efficient and scalable alternative for medical education assessment, even in highly specialized domains such as MV.
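The noninferiority logic described above (comparing the lower bound of a one-sided 95% CI for the AI-minus-HE score difference against the predefined margin of -3.45) can be sketched as follows. This is a minimal illustration with entirely hypothetical rating scores, not the study's data; the score values, sample sizes, and use of a normal-approximation (z = 1.645) interval are assumptions for demonstration only.

```python
import math
from statistics import mean, stdev

# Hypothetical per-rater total quality scores (NOT the study's data)
ai_scores = [18.2, 19.0, 17.5, 18.8, 19.4, 18.1, 17.9, 18.6]
he_scores = [18.5, 18.9, 17.8, 19.1, 18.7, 18.0, 18.3, 18.4]

# Predefined noninferiority margin: 15% of the total possible score
MARGIN = -3.45

# Difference in mean scores (AI minus HE)
diff = mean(ai_scores) - mean(he_scores)

# Standard error of the difference (Welch-style, normal approximation)
se = math.sqrt(stdev(ai_scores) ** 2 / len(ai_scores)
               + stdev(he_scores) ** 2 / len(he_scores))

# Lower bound of the one-sided 95% CI (z = 1.645); the interval is
# [lower_bound, +infinity)
lower_bound = diff - 1.645 * se

# Noninferiority is declared if the lower bound exceeds the margin
noninferior = lower_bound > MARGIN
print(f"diff={diff:.3f}, lower bound={lower_bound:.3f}, "
      f"noninferior={noninferior}")
```

With these illustrative numbers the lower bound sits well above -3.45, mirroring the study's reported bound of -1.15; a lower bound below the margin would fail to establish noninferiority.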