Rapid Context-based Identification of Target Sounds in an Auditory Scene


Abstract

To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. Although it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff [Gamble, M. L., & Woldorff, M. G. The temporal cascade of neural processes underlying target detection and attentional processing during auditory search. Cerebral Cortex (New York, N.Y.: 1991), 2014] recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented auditory scene. They reported an early, differential, bilateral activation (beginning at 60 msec) between feature-deviating target stimuli and physically equivalent feature-deviating nontargets, reflecting a rapid target-detection process. This was followed shortly thereafter (at 130 msec) by the lateralized N2ac ERP activation, which reflects the focusing of auditory spatial attention toward the target sound and parallels the attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included target and nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect derives from a template for the target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, an individual-differences analysis showed that the size of this effect was larger for participants with faster RTs. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a target sound, making use of additional auditory information in the environment when a relevant sound must be rapidly detected.
