Abstract
BACKGROUND: Translating evidence-based therapies from "bench to bedside" remains challenging, and implementation science (IS) experts are crucial to this process. Qualitative analyses are essential to IS but require extensive time and cost for manual coding. Many researchers now turn to artificial intelligence (AI) to accelerate qualitative analysis, but significant questions remain about the quality, validity, and ethics of applying large language models (LLMs) such as ChatGPT (OpenAI) to qualitative data. To this end, we developed a method for AI-assisted rapid qualitative analysis that addresses these concerns.

OBJECTIVE: This study aimed to develop AI-assisted rapid qualitative analysis for implementation science using open-source, encoder-based small language models (SLMs) to aid IS experts. We focused on 2 efficient and high-performing SLMs: distilled bidirectional encoder representations from transformers (DistilBERT) and efficiently learning an encoder that classifies token replacements accurately (ELECTRA). The objectives were to assess these models' accuracy in reproducing expert coding, to evaluate their generalizability to new coding scenarios, and to enhance their accessibility for nontechnical experts through user-friendly tools.

METHODS: Two previously coded IS datasets were used to train DistilBERT and ELECTRA models. These datasets were coded by IS experts using a mixed deductive and inductive approach, with initial categories derived from the domains of an IS framework: the Practical, Robust Implementation, and Sustainability Model (PRISM). We fine-tuned and evaluated DistilBERT and ELECTRA on these datasets, measuring performance by area under the precision-recall curve (AUPRC) and Cohen κ. To facilitate use by nonprogrammers, we then developed an open-source Python package (pytranscripts) that streamlines transcript processing, model classification, and evaluation. A companion Streamlit web application allows users to upload interview transcripts and obtain automated coding and analytics without any programming expertise.

RESULTS: Our findings demonstrate that SLMs can significantly accelerate qualitative analysis while maintaining high accuracy and agreement with human annotators, although these results are not universal and depend on how researchers approach qualitative coding. On the original dataset, DistilBERT achieved near-perfect agreement with human coders (Cohen κ=0.95), while ELECTRA showed substantial agreement (Cohen κ=0.71). However, both models' performance declined on the second, more ambiguous dataset, with DistilBERT's Cohen κ dropping to 0.48 and ELECTRA's to 0.39. Two primary drivers of this drop appear to be the number of codes applied to the dataset and whether coders apply multiple codes to each piece of data or constrain themselves to one.

CONCLUSIONS: This work demonstrates that SLMs can meaningfully assist qualitative researchers with coding tasks, provided attention is paid to how experts code the data that will train the SLM. This can be especially valuable in settings where deploying LLMs is impractical or undesirable.
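
To make the methods concrete, the following is a minimal sketch, assuming the Hugging Face transformers and scikit-learn libraries, of fine-tuning DistilBERT on expert-coded excerpts and scoring it with AUPRC and Cohen κ. The excerpts and the four-code label set are hypothetical placeholders; this is not the study's data or the pytranscripts pipeline.

```python
# Sketch: fine-tune DistilBERT to reproduce expert qualitative codes, then
# score agreement with AUPRC and Cohen kappa. All data below are hypothetical.
import numpy as np
import torch
from sklearn.metrics import average_precision_score, cohen_kappa_score
from transformers import DistilBertForSequenceClassification, DistilBertTokenizerFast

LABELS = ["recipients", "context", "implementation", "sustainability"]  # placeholder codes

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(LABELS),
    problem_type="single_label_classification",  # one code per excerpt
)

# Hypothetical expert-coded excerpts: (text, label index).
train = [
    ("Leadership backed the rollout from day one.", 2),
    ("Patients found the portal hard to navigate.", 0),
]

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for _ in range(3):  # a few passes over the toy data
    for text, label in train:
        batch = tokenizer(text, return_tensors="pt", truncation=True)
        loss = model(**batch, labels=torch.tensor([label])).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Evaluate held-out excerpts against the experts' gold labels.
model.eval()
test = [
    ("Funding ended when the grant closed.", 3),
    ("The clinic lacked private space for visits.", 1),
]
texts, gold = zip(*test)
with torch.no_grad():
    logits = model(**tokenizer(list(texts), return_tensors="pt",
                               padding=True, truncation=True)).logits
probs = torch.softmax(logits, dim=-1).numpy()
pred = probs.argmax(axis=-1)

kappa = cohen_kappa_score(gold, pred)  # chance-corrected model-human agreement
onehot = np.eye(len(LABELS))[list(gold)]
auprc = average_precision_score(onehot, probs, average="micro")  # area under PR curve
print(f"Cohen kappa={kappa:.2f}  AUPRC={auprc:.2f}")
```

The same loop applies to ELECTRA by swapping in its tokenizer and checkpoint; the single-label setup mirrors the coding constraint the results identify as a driver of performance.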
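Similarly, the sketch below illustrates an upload-and-code workflow in the spirit of the companion Streamlit application described above. The checkpoint and label names are stand-ins, and this is not the published application's code.

```python
# Sketch of a transcript auto-coding web app (run with: streamlit run app.py).
# Model checkpoint and labels are hypothetical stand-ins for a fine-tuned SLM.
import streamlit as st
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["recipients", "context", "implementation", "sustainability"]  # placeholder codes

@st.cache_resource
def load_model():
    # Stand-in for a fine-tuned DistilBERT/ELECTRA checkpoint.
    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    mdl = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=len(LABELS))
    mdl.eval()
    return tok, mdl

st.title("Transcript auto-coder (sketch)")
uploaded = st.file_uploader("Upload an interview transcript (.txt)", type="txt")
if uploaded:
    tok, mdl = load_model()
    lines = [l.strip() for l in uploaded.read().decode("utf-8").splitlines() if l.strip()]
    with torch.no_grad():
        logits = mdl(**tok(lines, return_tensors="pt",
                           padding=True, truncation=True)).logits
    preds = logits.argmax(dim=-1).tolist()
    st.table({"excerpt": lines, "predicted code": [LABELS[p] for p in preds]})
```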