Abstract
While generative AI is becoming increasingly available in higher education, faculty find it challenging to design, implement, and evaluate AI-enabled personalized learning systems within accreditation-constrained professional curricula. This method paper describes ADAPT (Assessment-Driven AI for Personalized Tutoring), a home-grown AI tutoring and remediation ecosystem implemented in a required PharmD immunology course. Using standard learning management (Canvas) and assessment (ExamSoft) platforms, a 20-item diagnostic quiz mapped to six immunology mastery domains (N = 34; mean 69.1%, SD 17.9; Cronbach's α = 0.73) triggered tiered, structured generative AI remediation at both the individual student and cohort levels. Instructional impact was evaluated using reliability indices, item-level difficulty analyses, and paired pre/post-assessment comparisons. Following AI-guided remediation, mean performance increased to 79.8% (+10.7 percentage points over the diagnostic exam), variability decreased (SD 14.4), and assessment reliability improved (ExamSoft KR-20 = 0.87) across the diagnostic, first midterm, and final examinations. Item difficulty stabilized (mean ≈ 0.80), with sustained retention of targeted concepts on the final examination. ADAPT provides a replicable, low-cost methodological blueprint for faculty to independently construct assessment-driven AI tutoring systems and lays the foundation for future AI-based predictive analytics workflows to identify at-risk students.