Nonnative audiovisual speech perception in noise: dissociable effects of the speaker and listener

PLoS One. 2014 Dec 4;9(12):e114439. doi: 10.1371/journal.pone.0114439. eCollection 2014.

Abstract

Nonnative speech poses a challenge to speech perception, especially in adverse listening environments. Audiovisual (AV) cues are known to improve native speech perception in noise. The extent to which AV cues benefit nonnative speech perception in noise, however, is much less well understood. Here, we examined native American English-speaking and native Korean-speaking listeners' perception of English sentences produced by a native American English speaker and a native Korean speaker across a range of signal-to-noise ratios (SNRs; -4 to -20 dB) in audio-only and audiovisual conditions. We used psychometric function analyses to characterize the pattern of AV benefit across SNRs. For native English speech, the largest AV benefit occurred at an intermediate SNR (-12 dB), whereas for nonnative English speech the largest AV benefit occurred at a higher SNR (-4 dB). The psychometric function analyses demonstrated that the patterns of AV benefit differed between native and nonnative English speech. The nativeness of the listener had a negligible effect on AV benefit across SNRs; however, the nonnative listeners' ability to gain AV benefit from native English speech was related to their English proficiency. These findings suggest that the native language backgrounds of both the speaker and the listener clearly modulate the optimal use of AV cues in speech recognition.
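To make the psychometric function analysis mentioned in the abstract concrete, the sketch below fits a two-parameter logistic function of SNR to proportion-correct data and computes the AV benefit as the difference between the fitted audio-only and audiovisual curves. This is a minimal illustration only: the paper's exact fitting procedure, parameterization, and data are not given in the abstract, so the logistic form, the SciPy-based fit, and all numeric values are assumptions or hypothetical placeholders.

```python
"""Illustrative sketch of a psychometric-function analysis of AV benefit.

Assumptions: a two-parameter logistic psychometric function and
hypothetical proportion-correct values at the SNRs used in the study
(-20 to -4 dB); neither reflects the authors' actual data or method.
"""
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, threshold, slope):
    """Logistic psychometric function: proportion correct as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# Hypothetical proportion-correct data (placeholders, not the study's results).
snrs = np.array([-20.0, -16.0, -12.0, -8.0, -4.0])
audio_only  = np.array([0.05, 0.15, 0.40, 0.70, 0.90])
audiovisual = np.array([0.10, 0.35, 0.75, 0.90, 0.95])

# Fit each condition separately; p0 gives rough starting guesses for threshold and slope.
(ao_thr, ao_slope), _ = curve_fit(logistic, snrs, audio_only,  p0=[-10.0, 0.5])
(av_thr, av_slope), _ = curve_fit(logistic, snrs, audiovisual, p0=[-10.0, 0.5])

# AV benefit at each SNR is the difference between the fitted AV and audio-only curves.
benefit = logistic(snrs, av_thr, av_slope) - logistic(snrs, ao_thr, ao_slope)
for snr, b in zip(snrs, benefit):
    print(f"SNR {snr:>5.0f} dB: estimated AV benefit = {b:.2f}")
```

With placeholder data like the above, the benefit curve peaks at an intermediate SNR, which is the kind of pattern the abstract reports for native English speech; shifting where the two fitted curves diverge most would move the peak benefit toward higher SNRs, as reported for nonnative speech.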

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Acoustic Stimulation
  • Adolescent
  • Adult
  • Humans
  • Korea
  • Language
  • Male
  • Photic Stimulation
  • Signal-To-Noise Ratio
  • Speech Acoustics
  • Speech Intelligibility
  • Speech Perception*
  • United States
  • Young Adult