Hearing what is being said: the distributed neural substrate for early speech interpretation

Lang Cogn Neurosci. 2024 Apr 28;39(9):1097-1116. doi: 10.1080/23273798.2024.2345308. eCollection 2024.

Abstract

Speech comprehension is remarkable for the immediacy with which the listener hears what is being said. Here, we focus on the neural underpinnings of this process in isolated spoken words. We analysed source-localised MEG data for nouns using Representational Similarity Analysis to probe the spatiotemporal coordinates of phonology, lexical form, and the semantics of emerging word candidates. Phonological model fit was detectable within 40-50 ms, engaging a bilateral network including superior and middle temporal cortex and extending into anterior temporal and inferior parietal regions. Lexical form emerged within 60-70 ms, and model fit to semantics appeared from 100-110 ms. Strikingly, the majority of vertices in a central core showed model fit to all three dimensions, consistent with a distributed neural substrate for early speech analysis. The early interpretation of speech seems to be conducted in a unified integrative representational space, in conflict with conventional views of a linguistically stratified representational hierarchy.
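The core RSA computation referred to above can be illustrated with a minimal sketch: build a representational dissimilarity matrix (RDM) from neural response patterns, build a second RDM from a model's feature descriptions of the same items, and take the rank correlation between the two as the "model fit". The array shapes, feature counts, and random data here are purely illustrative assumptions, not the paper's actual stimuli or pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: neural patterns for 8 words at one vertex/time
# window (8 words x 20 response features), and model features for the
# same 8 words (e.g. phonological descriptors; 5 features each).
neural = rng.normal(size=(8, 20))
model = rng.normal(size=(8, 5))

# Condensed RDMs: pairwise correlation distances between word patterns
# (length n*(n-1)/2 = 28 for n = 8 words).
neural_rdm = pdist(neural, metric="correlation")
model_rdm = pdist(model, metric="correlation")

# Model fit = Spearman rank correlation between the two RDMs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(round(rho, 3))
```

In a spatiotemporal analysis of the kind described, this correlation would be computed repeatedly, once per vertex and sliding time window, to map where and when each model (phonology, lexical form, semantics) fits the neural data.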

Keywords: Cohort; MEG; RSA; speech.

Grants and funding

This research was funded by a European Research Council Advanced Investigator Grant to LKT under the European Community’s Horizon 2020 Research and Innovation Programme (2014–2022 ERC Grant Agreement 669820), and funded in whole, or in part, by the Wellcome Trust (grant number 211200/Z/18/Z) to AC.