Abstract
In the McGurk effect, perception of an auditory syllable changes dramatically when it is paired with an incongruent visual syllable, countering our intuition that speech perception is solely an auditory process. The dominant modeling framework for the study of audiovisual speech perception is Bayesian causal inference, but current Bayesian models are unable to predict the wide range of percepts evoked by McGurk syllables. We explored whether a deep neural network (DNN) known as AVHuBERT could provide an alternative modeling framework. AVHuBERT model variants were presented with McGurk syllables consisting of auditory "ba" paired with visual "ga" recorded from eight different talkers. AVHuBERT identified McGurk syllables as something other than "ba" at a rate of 59%, demonstrating a robust McGurk effect. This rate was similar to that observed in humans: 100 participants presented with the same McGurk syllables reported non-"ba" percepts on 56% of trials. AVHuBERT variants and humans produced a wide variety of responses to McGurk syllables, including the canonical McGurk fusion percept of "da," responses without any initial consonant such as "ah," and responses with other initial consonants such as "fa." The ability of the DNN to predict percepts experienced by humans but not predicted by current Bayesian models suggests that DNNs and Bayesian models may provide complementary windows into the perceptual mechanisms underlying human audiovisual speech perception.