A Dynamic Neural Field Model of the McGurk Effect and Incongruous Audiovisual Speech Stimuli

Abstract

Our Dynamic Neural Field (DNF) model aims to simulate audiovisual integration in speech perception, including the well-known McGurk effect (McGurk & MacDonald, 1976). The classic McGurk effect is characterized by a "fusion effect," whereby incongruent auditory and visual stimuli are fused into a single percept; however, other audiovisual effects have also been documented in the extant literature. Our DNF model uses the same architecture and parameters across stimulus combinations to simulate a host of audiovisual illusory effects as well as audiovisually congruent, auditory-only, and visual-only controls. Our simulation results replicate reported rates of visual-dominant percepts, audiovisual fusion percepts, auditory-dominant percepts, and auditory dichotic fusion, and illustrate how a complex pattern of responses across different stimulus configurations can arise from common neural dynamics involved in binding information across sensory modalities. We are currently exploring how hemodynamic response predictions generated through our neural simulations relate to real-time behavior.
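For readers unfamiliar with the DNF formalism, the sketch below shows a minimal one-dimensional Amari-style neural field receiving two Gaussian inputs at different feature positions, loosely analogous to incongruent auditory and visual cues. This is a pedagogical illustration of field dynamics only, not the model described in the abstract: the field size, time constant, kernel widths, input positions, and the single-field setup are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a 1-D Amari-style dynamic neural field, assuming a
# hypothetical "speech feature" dimension. All parameters below are
# illustrative choices, not the abstract's model parameters.

N = 181                      # number of field sites
x = np.linspace(-90, 90, N)  # feature dimension (arbitrary units)
dx = x[1] - x[0]
tau, h = 20.0, -5.0          # time constant (ms) and resting level
u = np.full(N, h)            # field activation, starts at rest

def gauss(center, width, amp=1.0):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

# Lateral interaction: local excitation minus broader inhibition
# (difference of Gaussians), a standard DNF kernel choice.
d = x[:, None] - x[None, :]
kernel = (6.0 * np.exp(-0.5 * (d / 8.0) ** 2)
          - 4.0 * np.exp(-0.5 * (d / 20.0) ** 2))

def f(u, beta=4.0):
    # Sigmoid output nonlinearity gating lateral interactions
    return 1.0 / (1.0 + np.exp(-beta * u))

# Incongruent inputs at different feature positions, standing in for
# auditory and visual cues (e.g., /ba/ vs. /ga/ in a McGurk pairing).
s_aud = gauss(-10.0, 6.0, amp=6.5)
s_vis = gauss(+10.0, 6.0, amp=6.5)

dt = 1.0
for _ in range(500):  # 500 ms of Euler integration
    lateral = (kernel @ f(u)) * dx
    u += (dt / tau) * (-u + h + s_aud + s_vis + lateral)

# Depending on parameters, the field may settle on a single peak between
# the two inputs (a field analogue of fusion) or select one input.
print(f"peak location: {x[np.argmax(u)]:.1f}")
```

The key design point this sketch illustrates is that percept selection and fusion both fall out of the same interaction kernel and field equation; only the inputs change across stimulus conditions, echoing the abstract's claim that one architecture and parameter set covers congruent, incongruent, and unimodal cases.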

