Abstract
BACKGROUND: The accuracy and safety of medication orders generated by large language models (LLMs) must be demonstrated. Without standardization, performance evaluation is limited to time- and resource-intensive clinician grading. This evaluation aimed to develop a standardized medication format (MedMatch) that supports automated performance evaluation.

METHODS: First, a survey of 40 medication prompts was given to clinicians to assess agreement in medication order communication. Second, a clinician panel developed a standardized medication format (MedMatch) for oral and intravenous medications. Third, a clinician-annotated dataset of medication prompts and standardized answers in the MedMatch format was developed for LLM testing. Finally, LLMs were retested with the same dataset, adjusted to exclude route information, to evaluate the appropriate categorization of medication route.

RESULTS: Formal medication orders consistently showed low omission rates and high overlap for all entities compared with the verbal and brief written communication types. Lexical overlap results demonstrated pattern norms among clinicians, with entities appearing most commonly in positions 1-5 in the order of drug name, dose, unit, route, and frequency. In the second survey, the formal written group performed best, with 78.3% of prompts considered appropriate as a computer-generated response. LLM accuracy on MedMatch order standardization was highest in the oral solid (64.2-72.5%), intravenous intermittent (72.5-84.3%), and intravenous push (62.7-74.5%) categories. LLMs performed worst at accurately categorizing medication orders into the intravenous push (18-61%) and intravenous intermittent (51-100%) routes.

CONCLUSIONS: A standardized format for computer-based outputs may support automated performance analysis and enhance the clarity of medication communication.