Abstract
Biomedical text in public databases often exhibits unstandardized terminology and inconsistencies that impede machine learning applications and hinder data integration across biomedical databases. Leveraging generalized and specialized transformer/large language models (LLMs) offers a potentially scalable solution for terminology standardization. We evaluated this opportunity using the National Institutes of Health Clinical Trials Registry (CTR), which contains heterogeneous, free-text disease records from therapeutic trials. To systematically assess the ability of machine learning methods to assign biomedical terms accurately, we benchmarked 36 approaches using transformer/LLM-based text embeddings, along with traditional text-matching algorithms, against a clinical gold standard: the World Health Organization Classification of Tumours system (WHO System, also known as the WHO Blue Books). For this evaluation, we developed CANTOS (Clinical Trials Automated Nomenclature and Tumor Ontology Standardization), a computational benchmarking framework that extracts tumor names from the CTR and standardizes them using the WHO System and the National Cancer Institute Thesaurus (NCIt). We assessed standardization accuracy using a random sample of 1,600 CTR tumor names manually annotated with WHO System terms. Transformer/LLM-based embedding methods significantly outperformed text-matching approaches: all-MiniLM-L12-v2 + Euclidean distance achieved 67.7% accuracy (WHO-5th edition), while LTE-3 + Euclidean distance achieved 69.4% (WHO-all editions). Text-matching methods peaked at 32.6% accuracy. A majority-voting approach combining three high-accuracy, low-agreement methods further improved accuracy to 71.9% (WHO-5th) and 71.6% (WHO-all). Our findings demonstrate the effectiveness of embedding models in standardizing biomedical terminology and provide a reproducible framework for benchmarking machine learning methods against clinical gold standards using real-world datasets.
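To make the standardization step concrete, the following is a minimal sketch, not the paper's CANTOS implementation, of embedding-based nearest-neighbor term assignment using the all-MiniLM-L12-v2 model named above with Euclidean distance, plus a simple majority-vote helper in the spirit of the ensemble the abstract describes. The example tumor names, WHO terms, and the `majority_vote` function are illustrative assumptions, not artifacts from the study.

```python
# Sketch: embed free-text CTR tumor names and candidate WHO System terms,
# then assign each name to its nearest WHO term by Euclidean distance.
# Requires: pip install sentence-transformers numpy
from collections import Counter

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Hypothetical inputs; the real pipeline draws names from the CTR and
# reference terms from the WHO System / NCIt vocabularies.
ctr_names = ["glioblastoma multiforme", "non small cell lung cancer, stage IV"]
who_terms = [
    "Glioblastoma, IDH-wildtype",
    "Non-small cell lung carcinoma",
    "Diffuse astrocytoma",
]

ctr_emb = model.encode(ctr_names)  # shape: (n_names, dim)
who_emb = model.encode(who_terms)  # shape: (n_terms, dim)

# Pairwise Euclidean distances; pick the closest WHO term for each name.
dists = np.linalg.norm(ctr_emb[:, None, :] - who_emb[None, :, :], axis=-1)
best = dists.argmin(axis=1)
for name, idx in zip(ctr_names, best):
    print(f"{name!r} -> {who_terms[idx]!r}")


def majority_vote(predictions: list[str]) -> str:
    """Return the modal predicted term across several methods for one name
    (ties broken by first occurrence). A simplified stand-in for the
    three-method majority-voting ensemble reported in the abstract."""
    return Counter(predictions).most_common(1)[0][0]
```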