Abstract
INTRODUCTION AND AIMS: Oral lesions are highly prevalent globally, and oral cancer ranks among the most common malignancies, underscoring the need for AI-driven tools to support early detection and triage, especially in resource-scarce settings. This work investigates the capabilities of multimodal large language models (MLLMs) for automated detection of oral lesions in smartphone-acquired buccal mucosa images. Unlike convolutional neural networks (CNNs), which require large annotated datasets and significant computational resources, MLLMs need no task-specific training and can adapt quickly through intelligent prompting architectures. METHOD: We propose a novel expert-informed mixture-of-experts paradigm that mimics the idealised collaborative decision-making of clinicians: each expert module independently retrieves contextually relevant images, together with their expert-generated descriptions, from the existing data corpus, guided by a different similarity metric. These enriched examples help each expert form an independent, informed diagnosis. A specialist MLLM then reviews all expert opinions alongside the image and synthesises a final decision, effectively emulating a consensus diagnosis process. RESULTS: Experimental results on a dataset of buccal mucosa images show that the proposed method attains a sensitivity of 89.81%, comparable with existing CNN-based approaches. Additionally, we provide explainability through a detailed interpretation of experimental results and a failure case analysis, supported by insights from medical experts. CONCLUSION: The proposed framework represents a human-AI collaborative model in which diagnostic outcomes are not left entirely to the model's internal representations; rather, the framework actively shapes the model's reasoning through curated, expert-informed descriptions provided as few-shot examples.
Overall, it facilitates reliable decision-making while reducing dependence on large annotated datasets and extensive computational resources. CLINICAL RELEVANCE: By enabling accurate and interpretable lesion detection from smartphone images, the proposed approach has strong potential to support early triage in low-resource and remote healthcare environments, contributing to improved oral cancer prevention and patient outcomes.