Abstract
INTRODUCTION: How students engage with AI-driven feedback remains understudied in educational psychology. With ChatGPT's emergence as a generative artificial intelligence tool, automated design improvement feedback (ADIF) has expanded significantly. Grounded in self-regulated learning theory, this exploratory study investigates differential engagement patterns with ChatGPT-based ADIF across performance levels. METHODS: A mixed-methods multiple-case study examined 50 design students (25 high performers, 25 low performers) during a product design session. Data comprised behavioral observations of prompt strategies and query patterns, coded cognitive transitions analyzed via lag sequential analysis, and semi-structured interviews probing emotional engagement. RESULTS: High performers employed diverse prompt strategies with iterative refinement, exhibited cyclical metacognitive transitions, and characterized their interactions as exploratory and collaborative. Low performers used basic prompts with limited iteration, demonstrated linear query-to-implementation progressions, and described their interactions as structured guidance-seeking. DISCUSSION: The findings extend self-regulated learning theory to human-AI contexts, revealing how metacognitive capabilities shape behavioral, cognitive, and emotional engagement with AI feedback. The results suggest a need for scaffolding interventions that support lower-performing students in developing metacognitive strategies for effective AI interaction. This study contributes initial insights into performance-based variations in human-AI collaboration within educational contexts.