Abstract
BACKGROUND: Manual submission of clinical trial data to the ClinicalTrials.gov registry is labor-intensive and error-prone, contributing to variability in the completeness and consistency of registry entries. To explore whether recent advances in large language models could support this process, we developed ChatCT, a pilot retrieval-augmented system that drafts ClinicalTrials.gov registry elements.
METHODS: We evaluated ChatCT-generated registry elements along three dimensions: (1) semantic similarity to the public ClinicalTrials.gov record, (2) formatting compliance with ClinicalTrials.gov requirements, and (3) coverage of key biomedical trial concepts.
RESULTS: ChatCT-generated registry elements were highly semantically similar to human-authored ClinicalTrials.gov records (median BERTScore F1 ≈ 0.82). Formatting compliance was high for structured elements, including Study Design (91% of required fields present; mean completeness 0.897) and Arms/Interventions (75%; 0.772), whereas narrative sections showed greater variability, including Outcome Measures (79%; 0.929) and Study Description (57%; 0.784). Ontology-based concept extraction and matching achieved consistently high precision, ranging from 90% to 100%.
CONCLUSIONS: A retrieval-augmented large language model can generate ClinicalTrials.gov registry drafts that preserve essential protocol details and adhere to most formatting requirements. However, light post-processing (e.g., automated schema validation) remains necessary for full submission readiness. This proof-of-concept evaluation suggests that ChatCT-assisted drafting could support registry reporting by improving consistency between protocol documents and publicly reported trial information.
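To illustrate the concept-coverage metric reported above, precision can be read as the fraction of extracted concepts that also appear in the reference registry record. The sketch below is a minimal, hypothetical illustration of that calculation; the concept strings and the function name are invented for this example, and the actual evaluation used ontology-based extraction and matching rather than exact string comparison.

```python
def concept_precision(extracted, reference):
    """Fraction of extracted concepts that also appear in the reference set.

    A simplified stand-in for ontology-based matching: here, concepts are
    compared as exact strings after set conversion.
    """
    extracted_set, reference_set = set(extracted), set(reference)
    if not extracted_set:
        return 0.0
    return len(extracted_set & reference_set) / len(extracted_set)

# Hypothetical example: all three extracted concepts match the record.
extracted = ["metformin", "type 2 diabetes", "HbA1c"]
reference = ["metformin", "type 2 diabetes", "HbA1c", "placebo"]
print(concept_precision(extracted, reference))  # → 1.0
```

In practice, matching against an ontology (e.g., normalizing synonyms to shared concept identifiers before comparison) replaces the exact-string check used here.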