Abstract
OBJECTIVES: Large language models (LLMs) face challenges in inductive thematic analysis, a task requiring deep interpretive, domain-specific expertise. We evaluated the feasibility of using LLMs to replicate expert-driven thematic analysis of social media data.

MATERIALS AND METHODS: Using 2 temporally nonintersecting Reddit datasets on xylazine (n = 286 and 686, for model optimization and validation, respectively) with 12 expert-derived themes, we evaluated 5 LLMs against expert coding. We modeled the task as a series of binary classifications, rather than a single, multilabel classification, employing zero-, single-, and few-shot prompting strategies and measuring performance via accuracy, precision, recall, and F1 score.

RESULTS: On the validation set, GPT-4o with 2-shot prompting performed best (accuracy: 90.9%; F1 score: 0.71). For high-prevalence themes, model-derived thematic distributions closely mirrored expert classifications (eg, xylazine: 13.6% vs 17.8%; medications for opioid use disorders: 16.5% vs 17.8%).

CONCLUSION: Our findings suggest that few-shot LLM-based approaches can automate thematic analyses, offering a scalable supplement for qualitative research.
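The per-theme binary evaluation described above can be sketched as follows. This is a minimal illustration, not the study's code: the theme names, label arrays, and function names are hypothetical, and it assumes expert and model annotations are available as aligned binary vectors (1 = theme present, 0 = absent) for each theme.

```python
def binary_metrics(expert, model):
    """Accuracy, precision, recall, and F1 for one theme's binary labels."""
    tp = sum(1 for e, m in zip(expert, model) if e and m)        # true positives
    fp = sum(1 for e, m in zip(expert, model) if not e and m)    # false positives
    fn = sum(1 for e, m in zip(expert, model) if e and not m)    # false negatives
    tn = sum(1 for e, m in zip(expert, model) if not e and not m)  # true negatives
    accuracy = (tp + tn) / len(expert)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# One binary classification per theme, rather than a single multilabel task
# (labels below are made-up examples for 5 posts and 2 of the 12 themes):
expert_labels = {"xylazine": [1, 0, 1, 1, 0], "moud": [0, 0, 1, 0, 0]}
model_labels = {"xylazine": [1, 0, 0, 1, 0], "moud": [0, 1, 1, 0, 0]}

for theme in expert_labels:
    print(theme, binary_metrics(expert_labels[theme], model_labels[theme]))
```

Framing the task this way yields one set of metrics per theme, which can then be averaged across themes and compared against the expert-derived thematic distribution.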