Abstract
PURPOSE: Cohort selection and eligibility screening are critical steps in clinical research, especially in trials where manual patient matching remains a major bottleneck. This study investigates the use of Natural Language Processing (NLP) and Large Language Models (LLMs) in two real-world use cases, namely Atrial Fibrillation (AF) progression and Heart Failure (HF) decompensation, within a non-English clinical context. We specifically address the following research questions: (1) Can discharge reports and NLP support cohort selection? (2) Can LLMs effectively model longitudinal patient trajectories and temporal reasoning? (3) Do general-purpose or domain-adapted LLMs outperform rule-based baselines for this task? (4) Do small-scale LLMs offer performance comparable to large foundation models? METHODS: A dataset of 212 patients was manually annotated for AF progression using discharge reports. Two strategies were evaluated: (1) an adapted rule-based pipeline and (2) zero-shot open-source LLMs with varying prompt structures. To assess generalizability, an additional dataset of 100 patients was annotated for HF decompensation. RESULTS: The adapted rule-based approach achieved the highest accuracy (0.82), but LLMs with task-division prompts performed comparably (up to 0.79) while requiring significantly less manual effort. The medium-sized general-domain gemma-3 model outperformed the others. CONCLUSIONS: (1) Discharge reports are a valuable resource for automatic cohort selection, with both the rule-based method and LLMs showing promising results. (2) While LLMs struggled with long-context inputs, they handled temporal reasoning well when explicit dates were provided. (3) Larger models did not always outperform smaller ones. (4) Prompt language strongly influenced performance, and medical model variants were not consistently superior.