Abstract
Facial expressions enable individuals to assess and understand the emotions conveyed by others. Two crucial sources of expressive cues on the human face, the eyes and the mouth, capture attention and serve as reliable shortcuts for expression recognition. However, how the brain extracts emotional information from these diagnostic features remains unclear. We investigated this question using electroencephalography (EEG) combined with a rapid serial visual presentation task in which participants recognized facial expressions (fearful, happy, and neutral) presented in three formats (whole face, eye region, and mouth region). Participants recognized happy expressions from the mouth region more accurately than the other expressions, affirming the role of diagnostic features in facilitating bottom-up attentional capture. The isolated eye region, which has higher visual saliency, elicited the largest P1 component. Diagnostic features, such as a happy mouth and fearful eyes, elicited a larger N170 component than non-diagnostic features, such as a fearful mouth and happy eyes. Source analysis of the N170 showed that the fusiform gyrus responded similarly to these emotional features. The P3 component effectively discriminated between emotional contents. When whole faces were visible, fearful and happy expressions were not distinguishable in the N170, whereas the P3 amplitude was larger for fearful faces than for happy faces. Our study contributes to understanding the distinct roles that facial features play in emotional perception, attention, and face processing.