Abstract
Medical large language models (LLMs) perform well on general medical NLP tasks, but models tailored for cancer phenotyping and diagnosis are lacking. Moreover, models with tens of billions of parameters impose a heavy computational burden in healthcare settings. To this end, we present CancerLLM, a 7-billion-parameter Mistral-style model pre-trained on 2.7 M clinical notes and 515 K pathology reports spanning 17 cancer types, then fine-tuned for cancer phenotype extraction and diagnosis generation. On internal benchmarks, CancerLLM achieved strong performance, with an F1 score of 91.78% on phenotype extraction and 86.81% on diagnosis generation, outperforming existing LLMs by an average F1 improvement of 9.23%. CancerLLM was also more efficient in time and GPU usage and more robust than the compared LLMs. These results suggest that CancerLLM can provide an effective and robust solution to advance clinical research and practice in the cancer domain.