Abstract
Background and Purpose: We present a pilot benchmark dataset of 378 preclinical histological samples for evaluating large language model (LLM) performance on multi-dimensional classification tasks. This dataset addresses the lack of standardized benchmarks for assessing LLMs in preclinical histopathology, encompassing species identification, organ recognition, staining methods, and preparation techniques.

Methods: We evaluated GPT-4.1, GPT-4o-mini, and Llama 3.2 on 378 histological samples across four classification dimensions: species identification (mouse, rabbit, rat), organ recognition (kidney, liver, prostate, spleen), staining method classification (H&E, Elastica van Gieson, collagen, iron, IHC-elastin, Movat's pentachrome), and preparation type determination (frozen vs. paraffin-embedded). Performance was assessed using sensitivity and specificity metrics with confusion matrix analysis.

Results: Model performance varied substantially across tasks and exhibited strong sensitivity to class imbalance. For preparation type classification, GPT-4.1 achieved the most balanced performance (50% frozen sensitivity, 85.7% paraffin sensitivity), while Llama 3.2 failed to recognize paraffin samples (0% sensitivity). In species classification, Llama 3.2 was the only model capable of identifying all three species (rabbit: 75% sensitivity, rat: 85.7% sensitivity) despite poor mouse recognition (0.3% sensitivity). GPT-4.1 achieved higher mouse sensitivity (70.4%) but failed on the minority species. For staining classification, Llama 3.2 demonstrated the highest overall performance, achieving >88% sensitivity for most staining types, while GPT-4o-mini showed perfect H&E recognition (100% sensitivity).

Conclusions: Current LLMs demonstrate variable performance on histological classification and substantial sensitivity to class imbalance. While not suitable for standalone diagnostic use, they may serve as useful screening tools in research settings with appropriate human oversight.
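To make the evaluation metric concrete, the one-vs-rest sensitivity and specificity computation from a confusion matrix (as referenced in the Methods) can be sketched as follows. The confusion-matrix counts below are invented for illustration only and are not the study's data.

```python
import numpy as np

def per_class_sensitivity_specificity(cm):
    """One-vs-rest sensitivity and specificity for each class.

    cm: square confusion matrix, rows = true class, cols = predicted class.
    Returns two arrays (sensitivity, specificity), one entry per class.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)                 # correct predictions per class
    fn = cm.sum(axis=1) - tp         # true class missed
    fp = cm.sum(axis=0) - tp         # other classes predicted as this class
    tn = total - tp - fn - fp        # everything else
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical 3-class species example (mouse, rabbit, rat) --
# illustrative counts, not results from this benchmark.
cm = [[70, 20, 10],
      [ 5, 75, 20],
      [ 3, 11, 86]]
sens, spec = per_class_sensitivity_specificity(cm)
```

A one-vs-rest breakdown like this is what makes class imbalance visible: a model can reach high overall accuracy while a minority class shows near-zero sensitivity.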