Left superior temporal gyrus is coupled to attended speech in a cocktail-party auditory scene

Interesting low-frequency (delta) results here.

Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of the different sounds in a multitalker auditory scene, using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a “cocktail party” multitalker background noise at four signal-to-noise ratios (5, 0, −5, and −10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4–8 Hz were coupled to the activity of the right supratemporal auditory cortex. In the cocktail-party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended multitalker background. The coupling strengths decreased as the level of the multitalker background increased. During the cocktail-party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with the bilateral coupling observed without the multitalker background, whereas the 4–8 Hz coupling remained right-hemisphere lateralized in both conditions. Brain activity was not coupled to the multitalker background as a whole or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyrus in extracting the slow ∼0.5 Hz modulations that likely convey the attended speech stream within a multitalker auditory scene.
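
A minimal sketch of the kind of corticovocal coherence analysis described above, using placeholder data rather than the authors' pipeline: the sampling rate, segment length, band edges, and the `audio`/`meg` arrays are all my own assumptions, chosen only to show how band-averaged speech-brain coherence at ∼0.5 Hz and 4–8 Hz could be computed with SciPy.

```python
# Illustrative speech-brain coherence sketch (placeholder data, assumed parameters).
import numpy as np
from scipy.signal import hilbert, coherence

fs = 1000.0                                   # assumed common sampling rate (Hz)
rng = np.random.default_rng(0)
audio = rng.standard_normal(int(600 * fs))    # placeholder: 10 min of attended speech
meg = rng.standard_normal(int(600 * fs))      # placeholder: one MEG sensor time course

# Temporal envelope of the speech stream via the Hilbert analytic amplitude.
envelope = np.abs(hilbert(audio))

# Magnitude-squared coherence; long segments give spectral resolution near 0.5 Hz.
f, cxy = coherence(envelope, meg, fs=fs, nperseg=int(10 * fs))

# Band-averaged coupling strengths in the two bands discussed in the abstract.
coh_phrasal = cxy[(f >= 0.2) & (f <= 1.0)].mean()   # ~0.5 Hz modulations
coh_syllabic = cxy[(f >= 4.0) & (f <= 8.0)].mean()  # 4-8 Hz modulations
print(f"~0.5 Hz coherence: {coh_phrasal:.3f}, 4-8 Hz coherence: {coh_syllabic:.3f}")
```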

Congruent visual speech enhances cortical entrainment to continuous auditory speech

Nice article that bears out predictions made by Peelle & Sommers (2015).

We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to the syllabic rate (2–6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually.
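
One common way to index this kind of band-limited envelope tracking is to band-pass both the speech envelope and the EEG into the syllabic range and correlate them; the sketch below does that under my own assumptions (channel, sampling rate, and filter settings are illustrative, and this is not the analysis code used in the paper).

```python
# Illustrative 2-6 Hz envelope-tracking sketch (placeholder data, assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt

def bandlimit(x, fs, lo=2.0, hi=6.0, order=4):
    """Zero-phase Butterworth band-pass restricted to the syllabic band."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

fs = 128.0                                      # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.standard_normal(int(120 * fs))        # placeholder: one EEG channel, 2 min
envelope = rng.standard_normal(int(120 * fs))   # placeholder: speech envelope at the same rate

# Correlate the band-limited envelope with the band-limited EEG; the abstract's
# result would correspond to a larger value for congruent audiovisual speech.
tracking = np.corrcoef(bandlimit(envelope, fs), bandlimit(eeg, fs))[0, 1]
print(f"2-6 Hz envelope tracking (Pearson r): {tracking:.3f}")
```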

Oscillatory visual activity modulates spatial attention to rhythmic inputs

Using high-density electroencephalography in humans, the authors observed a bilateral entrainment response to the rhythm (1.3 or 1.5 Hz) of an attended stimulation stream, concurrent with a considerably weaker contralateral entrainment to a competing rhythm. That ipsilateral visual areas strongly entrained to the attended stimulus is notable because competing inputs to these regions were being driven at an entirely different rhythm. Strong modulations of phase locking, together with only weak modulations of single-trial power, suggest that entrainment was driven primarily by phase alignment of ongoing oscillatory activity.
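
The phase-locking-versus-power contrast can be made concrete with a small sketch. This is illustrative only: the `epochs` array, sampling rate, and trial count are assumptions rather than the study's data or code, but it shows how inter-trial phase coherence and single-trial power at the driven 1.3 Hz rhythm can be compared.

```python
# Illustrative phase-locking vs. single-trial power sketch (placeholder epochs).
import numpy as np

fs = 250.0                                        # assumed EEG sampling rate (Hz)
f_drive = 1.3                                     # attended stimulation rhythm (Hz)
rng = np.random.default_rng(2)
epochs = rng.standard_normal((40, int(10 * fs)))  # placeholder: 40 trials x 10 s

# Complex Fourier coefficient of every trial at the driving frequency.
t = np.arange(epochs.shape[1]) / fs
kernel = np.exp(-2j * np.pi * f_drive * t)
coefs = epochs @ kernel / epochs.shape[1]

# Inter-trial phase coherence: resultant length of the unit phase vectors (0-1).
itpc = np.abs(np.mean(coefs / np.abs(coefs)))

# Single-trial power at the same frequency, averaged across trials.
power = np.mean(np.abs(coefs) ** 2)
print(f"ITPC at {f_drive} Hz: {itpc:.3f}; mean single-trial power: {power:.3e}")
```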