Abstract
With the rapid expansion of the digital environment, content employing Artificial Intelligence (AI)-generated characters has surged, underscoring the importance of establishing emotional rapport with users. This study employed an eye-tracker and functional near-infrared spectroscopy (fNIRS) to examine, with objective physiological and behavioral metrics, how users' visual attention and prefrontal cognitive responses differ across age and image type conditions. Twenty-four healthy university students were recruited and presented with 18 static facial images that crossed six age conditions, ranging from infant to elderly, with three image type conditions: real, two-dimensional (2D), and three-dimensional (3D) representations. Visual attention was measured with the eye-tracker using total duration of fixations, average duration of fixations, number of fixations, and average pupil diameter, while prefrontal cognitive responses were recorded with fNIRS as changes in oxygenated hemoglobin (HbO) concentration. When participants viewed images in the 3D condition, the middle-aged condition showed a significantly greater average pupil diameter than the other age conditions (p = 0.0031). Similarly, when participants viewed middle-aged condition images, the 3D condition yielded a significantly greater average pupil diameter than the other image type conditions (p = 0.0215). In the fNIRS data, the child condition elicited higher HbO concentrations than the other age conditions at Channel 5 in the right dorsolateral prefrontal cortex (DLPFC; p = 0.003) and at Channel 19 in the left DLPFC (p = 0.038). Regarding image type, Channel 35 in the left DLPFC showed higher HbO for the 2D and 3D conditions than for the real condition (p = 0.006). Both age and image type thus significantly affected visual attention and prefrontal cognitive responses.
These findings suggest that visual stimuli influence not only simple preference but also users' emotional responses and broader cognitive processing. Furthermore, the study's integrated approach, combining eye-tracking and fNIRS, can provide practical evidence for user experience-based AI content and interface design.