Characterizing the impaired development of multisensory speech integration: A hierarchical EEG framework
This project will investigate the neural basis of multisensory speech deficits in children with autism, and the recovery of this function in adolescence. Building on previous behavioral work, this study will record multi-channel EEG from children and teenagers with and without a diagnosis of autism whilst they are presented with recordings of an actress reciting children's stories, presented in audio-only, visual-only, and audiovisual formats. Background noise will be presented at varying levels to modulate the intelligibility of the video clips. Using a novel EEG analysis framework, we will probe multisensory speech processing at the levels of acoustics, phonemes, and words/meaning. We aim to establish where and when multisensory integration breaks down in the speech-processing hierarchy.
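At the acoustic level, analyses of this kind commonly take the broadband amplitude envelope of the speech signal as the stimulus feature. The sketch below shows one way such an envelope might be computed, assuming a Hilbert-transform approach; the function name speech_envelope and the sampling rates are illustrative, not part of this project's pipeline.

```python
# A minimal sketch, assuming the acoustic-level feature is the broadband
# Hilbert envelope; names and sampling rates here are illustrative only.
from math import gcd

import numpy as np
from scipy.signal import hilbert, resample_poly

def speech_envelope(audio: np.ndarray, fs_audio: int, fs_eeg: int = 128) -> np.ndarray:
    """Broadband amplitude envelope of a speech waveform, downsampled
    to the EEG sampling rate so it can be regressed against the EEG."""
    envelope = np.abs(hilbert(audio))   # instantaneous amplitude
    g = gcd(fs_eeg, fs_audio)           # reduce the resampling ratio
    return resample_poly(envelope, fs_eeg // g, fs_audio // g)

# Example with a synthetic 2 s waveform sampled at 44.1 kHz
audio = np.random.default_rng(0).standard_normal(2 * 44100)
env = speech_envelope(audio, fs_audio=44100)   # ~256 samples at 128 Hz
```

At the phoneme and word/meaning levels, the envelope would presumably be replaced by analogous time-aligned feature matrices (e.g., indicators of phoneme onsets, or word-level markers), which is what makes the framework hierarchical.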
Stimulus reconstruction (i.e., backward modeling) can be used to decode specific stimulus features from recorded neural response data in order to estimate how accurately that information was encoded in the brain. Temporal response function (TRF) estimation (i.e., forward modeling) can be used in a similar manner to predict the neural response to a novel stimulus, but it also allows for detailed examination of how the stimulus features were encoded and for interpretation of the underlying neural generators.
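As a concrete illustration, both models reduce to time-lagged regularized (ridge) regression between a stimulus feature matrix and the multi-channel EEG, with the roles of stimulus and response swapped between the forward and backward cases. The sketch below is a generic implementation under that assumption, not this study's own code; the lag window, regularization parameter lam, and the synthetic data are placeholders.

```python
# A minimal sketch of forward (TRF) and backward (decoder) modeling as
# time-lagged ridge regression; all data below are synthetic placeholders.
import numpy as np

def lag_matrix(x: np.ndarray, lags: list[int]) -> np.ndarray:
    """Stack time-shifted copies of x (time x features), one per lag,
    so that row t of the output holds x[t - lag] for each lag."""
    n, d = x.shape
    X = np.zeros((n, d * len(lags)))
    for i, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, i * d:(i + 1) * d] = x[:n - lag]
        else:
            X[:n + lag, i * d:(i + 1) * d] = x[-lag:]
    return X

def ridge_fit(X: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Closed-form ridge regression: w = (X'X + lam * I)^(-1) X'y."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ y)

fs = 128                                  # EEG sampling rate (assumed)
rng = np.random.default_rng(0)
stim = rng.standard_normal((60 * fs, 1))  # e.g., 60 s speech envelope
eeg = rng.standard_normal((60 * fs, 32))  # e.g., 32-channel EEG

# Forward model (TRF): predict EEG from the stimulus at lags 0-250 ms,
# i.e., the neural response follows the stimulus.
fwd_lags = list(range(0, int(0.25 * fs)))
trf = ridge_fit(lag_matrix(stim, fwd_lags), eeg, lam=1e2)

# Backward model (decoder): reconstruct the stimulus from EEG samples
# that follow it, i.e., negative lags in the convention of lag_matrix.
bwd_lags = list(range(-int(0.25 * fs), 1))
dec = ridge_fit(lag_matrix(eeg, bwd_lags), stim, lam=1e2)

# Reconstruction accuracy: correlation between decoded and actual stimulus
# (on held-out data in practice; in-sample here purely for brevity).
recon = lag_matrix(eeg, bwd_lags) @ dec
r = np.corrcoef(recon.ravel(), stim.ravel())[0, 1]
```

In practice the regularization parameter would be tuned by cross-validation and accuracy evaluated on held-out trials; examining how the forward-model weights vary over lags and channels is what supports inferences about the underlying neural generators.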