A deep neural network model of audiovisual speech recognition reports the McGurk effect


Abstract

In the McGurk effect, perception of an auditory syllable changes dramatically when it is paired with an incongruent visual syllable, countering our intuition that speech perception is solely an auditory process. The dominant modeling framework for the study of audiovisual speech perception is that of Bayesian causal inference, but current Bayesian models are unable to predict the wide range of percepts evoked by McGurk syllables. We explored whether a deep neural network (DNN) known as AVHuBERT could provide an alternative modeling framework. AVHuBERT model variants were presented with McGurk syllables consisting of auditory "ba" paired with visual "ga" recorded from eight different talkers. AVHuBERT identified McGurk syllables as something other than "ba" at a rate of 59%, demonstrating a robust McGurk effect. The rate of the McGurk effect was similar to that observed in humans: 100 participants presented with the same McGurk syllables reported non-"ba" percepts on 56% of trials. AVHuBERT variants and humans produced a wide variety of responses to McGurk syllables, including the canonical McGurk fusion percept of "da," responses without any initial consonant, such as "ah," and responses with other initial consonants, such as "fa." The ability to predict percepts experienced by humans but not predicted by current Bayesian models suggests that DNNs and Bayesian models may provide complementary windows into the perceptual mechanisms underlying human audiovisual speech perception.
