Human Superior Temporal Gyrus Organization of Spectrotemporal Modulation Tuning Derived from Speech Stimuli

The human superior temporal gyrus (STG) is critical for speech perception, yet the organization of spectrotemporal processing of speech within the STG is not well understood. Here, to characterize the spatial organization of spectrotemporal processing of speech across human STG, we use high-density cortical surface field potential recordings while participants listened to natural continuous speech. While synthetic broad-band stimuli did not yield sustained activation of the STG, spectrotemporal receptive fields could be reconstructed from vigorous responses to speech stimuli. We find that the human STG displays a robust anterior–posterior spatial distribution of spectrotemporal tuning in which the posterior STG is tuned for temporally fast-varying speech sounds that have relatively constant energy across the frequency axis (low spectral modulation), while the anterior STG is tuned for temporally slow-varying speech sounds that have a high degree of spectral variation across the frequency axis (high spectral modulation). This work illustrates the organization of spectrotemporal processing in the human STG, and illuminates processing of ethologically relevant speech signals in a region of the brain specialized for speech perception.
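A spectrotemporal receptive field (STRF) of the kind reconstructed here is commonly estimated by regularized linear regression of the neural response onto a time-lagged stimulus spectrogram. The following is a minimal sketch of that idea on simulated data; the stimulus, response, regularization value, and dimensions are invented placeholders, not the study's actual data or pipeline.

```python
import numpy as np

# Hedged sketch: estimate an STRF by ridge regression of a simulated
# response onto a time-lagged "spectrogram". All values are toy choices.
rng = np.random.default_rng(0)
n_t, n_freq, n_lags = 2000, 16, 10      # time bins, frequency bands, history lags

spec = rng.standard_normal((n_t, n_freq))    # stand-in speech spectrogram

# Design matrix: each row holds the recent stimulus history at all lags.
X = np.zeros((n_t, n_freq * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_t - lag]

true_strf = rng.standard_normal(n_freq * n_lags)      # ground-truth kernel
y = X @ true_strf + 0.5 * rng.standard_normal(n_t)    # simulated response

lam = 10.0                                            # ridge penalty (arbitrary)
strf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
strf = strf.reshape(n_lags, n_freq)                   # lags x frequency kernel
```

The recovered kernel can then be summarized by its spectral and temporal modulation content, which is the quantity whose anterior–posterior gradient the abstract describes.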

Frequency selectivity of voxel-by-voxel functional connectivity in human auditory cortex

While functional connectivity in the human cortex has been increasingly studied, its relationship to the cortical representation of sensory features remains less well documented. We used functional magnetic resonance imaging to demonstrate that voxel-by-voxel intrinsic functional connectivity (FC) is selective for the frequency preference of voxels in the human auditory cortex. Specifically, FC was significantly higher for voxels with similar frequency tuning than for voxels with dissimilar tuning functions. Frequency-selective FC, measured via the correlation of residual hemodynamic activity, was not explained by generic FC that depends on spatial distance over the cortex. This pattern remained even when FC was computed using residual activity taken from resting epochs. Further analysis showed that voxels in the core fields of the right hemisphere have higher frequency selectivity in within-area FC than their counterparts in the left hemisphere, or than voxels in the non-core fields of the same hemisphere. Frequency-selective FC is consistent with previous findings of topographically organized FC in the human visual and motor cortices. The high degree of frequency selectivity in the right core area is in line with findings and theoretical proposals regarding the asymmetry of human auditory cortex for spectral processing.
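The core comparison in this analysis is between FC of voxel pairs with similar versus dissimilar frequency preference, computed from residual time series. Below is a minimal sketch of that comparison on simulated residuals; the voxel counts, frequency labels, and shared-signal weighting are illustrative assumptions, not the study's fMRI data.

```python
import numpy as np

# Hedged sketch: voxel pairs sharing a best frequency are built to share a
# latent residual signal, so their FC should exceed that of mismatched pairs.
rng = np.random.default_rng(1)
n_voxels, n_tr = 60, 300
freqs = [0.5, 1, 2, 4, 8]                       # toy best-frequency labels (kHz)
best_freq = rng.choice(freqs, size=n_voxels)

latent = {f: rng.standard_normal(n_tr) for f in freqs}
resid = np.array([0.6 * latent[f] + rng.standard_normal(n_tr)
                  for f in best_freq])          # simulated residual activity

fc = np.corrcoef(resid)                         # voxel-by-voxel FC matrix
same = best_freq[:, None] == best_freq[None, :]
off_diag = ~np.eye(n_voxels, dtype=bool)

fc_similar = fc[same & off_diag].mean()         # FC, matched tuning
fc_dissimilar = fc[~same & off_diag].mean()     # FC, mismatched tuning
```

In the actual study this contrast would additionally be controlled for cortical distance, since nearby voxels tend both to share tuning and to be more correlated.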

Parcellation of human and monkey auditory cortex with fMRI pattern classification

We propose a marker for the core auditory cortex (AC) based directly on functional magnetic resonance imaging (fMRI) data and pattern classification. We show that a portion of AC in Heschl's gyrus classifies sound frequency more accurately than other regions in AC. Using fMRI data from macaques, we validate that the region where frequency classification performance is significantly above chance overlaps core auditory fields, predominantly A1. Within this region, we measure tonotopic gradients and estimate the locations of the human homologues of the core auditory subfields A1 and R.
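The logic of the marker is that multivoxel patterns in a core-like region should discriminate sound frequency above chance, while patterns elsewhere should not. A minimal sketch on simulated patterns, using a simple leave-one-out nearest-centroid classifier (a stand-in, not necessarily the classifier used in the study):

```python
import numpy as np

# Hedged sketch: "core-like" voxel patterns carry frequency information,
# control patterns do not; classification accuracy separates the two.
rng = np.random.default_rng(2)
n_trials, n_voxels, n_freqs = 40, 30, 4
labels = np.repeat(np.arange(n_freqs), n_trials // n_freqs)

signal = rng.standard_normal((n_freqs, n_voxels))   # per-frequency pattern
core = signal[labels] + 0.8 * rng.standard_normal((n_trials, n_voxels))
ctrl = rng.standard_normal((n_trials, n_voxels))    # no frequency information

def loo_accuracy(X, y):
    """Leave-one-out nearest-centroid classification accuracy."""
    hits = 0
    for i in range(len(y)):
        train = np.ones(len(y), dtype=bool)
        train[i] = False
        cents = np.array([X[train & (y == c)].mean(0) for c in np.unique(y)])
        hits += np.argmin(((cents - X[i]) ** 2).sum(1)) == y[i]
    return hits / len(y)

acc_core = loo_accuracy(core, labels)   # should exceed chance (0.25)
acc_ctrl = loo_accuracy(ctrl, labels)   # should hover near chance
```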

Transient hearing loss in a critical period leads to altered auditory cortex

To examine whether these cellular properties could recover from hearing loss (HL), earplugs were removed either before (P17) or after (P23) the closure of these critical periods (CPs). The earlier age of hearing restoration led to greater recovery of cellular function, but firing rate remained disrupted. When earplugs were removed after CP closure, several changes persisted into adulthood.

Flexible information coding in auditory cortex during perception, imagery, and memory for sounds

Auditory imagery of the same sounds evokes overall activity in auditory cortex similar to that evoked by perception. However, during imagery, abstracted, categorical information is encoded in the neural patterns, particularly when individuals experience more vivid imagery. This highlights the necessity of moving beyond traditional “brain mapping” inference in human neuroimaging, which assumes that common regional activation implies similar mental representations.

Rhythmic auditory cortex activity shapes stimulus–response gain and background firing

We found that phase-dependent models better reproduced the observed responses than static models, during both stimulation with a series of natural sounds and epochs of silence. This was attributable to two factors: (1) phase-dependent variations in background firing (most prominent for delta; 1–4 Hz); and (2) modulations of response gain that rhythmically amplify and attenuate the responses at specific phases of the rhythm (prominent for frequencies between 2 and 12 Hz).
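The two factors identified here can be written as a simple phase-dependent firing-rate model: background rate and stimulus-response gain both vary with the phase of an ongoing rhythm. The following sketch simulates that model; the 3 Hz rhythm frequency, coupling strengths, and stimulus drive are invented for illustration, not fitted to the recordings.

```python
import numpy as np

# Hedged sketch: rate(t) = background(phase) + gain(phase) * drive(t),
# versus a static model with constant baseline and gain.
rng = np.random.default_rng(3)
fs, dur = 100.0, 10.0                        # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
phase = 2 * np.pi * 3.0 * t                  # phase of a 3 Hz (delta) rhythm

drive = np.abs(rng.standard_normal(t.size))  # rectified toy stimulus drive

background = 5.0 + 2.0 * np.cos(phase)       # phase-dependent baseline (sp/s)
gain = 1.0 + 0.5 * np.cos(phase)             # rhythmic response gain

rate_phase = background + gain * drive       # phase-dependent model
rate_static = 5.0 + 1.0 * drive              # static comparison model
```

Setting `drive` to zero recovers the silence condition, where only the phase-dependent background term distinguishes the two models.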

Aging after noise exposure: Acceleration of cochlear synaptopathy in “recovered” ears

Noise exposures that produce only transient threshold shifts can nonetheless cause cochlear synaptopathy that is exacerbated by aging.

The synaptopathic noise (100 dB) caused 35–50 dB threshold shifts at 24 h. By 2 weeks, thresholds had recovered, but synaptic counts and ABR amplitudes at high frequencies were reduced by up to ∼45%. As exposed animals aged, synaptopathy was exacerbated compared with controls and spread to lower frequencies. Proportional ganglion cell losses followed. Threshold shifts first appeared >1 year after exposure and, by ∼20 months, were up to 18 dB greater in the synaptopathic noise group. Outer hair cell losses were exacerbated in the same time frame (∼10% at 32 kHz).

Modulation-frequency-specific adaptation in awake auditory cortex

By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels.
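The stimuli in this paradigm are pairs of sinusoidally amplitude-modulated (SAM) tones: a masker followed by a probe signal. A minimal sketch of how such a pair can be synthesized; the carrier frequency, modulation rate, durations, and gap are arbitrary illustrative values, not the study's parameters.

```python
import numpy as np

# Hedged sketch: build a SAM masker-signal pair. A SAM tone is a carrier
# sinusoid multiplied by the envelope 1 + depth * sin(2*pi*fm*t).
fs = 44100.0                                   # audio sample rate (Hz)

def sam_tone(carrier_hz, mod_hz, dur_s, depth=1.0):
    """Sinusoidally amplitude-modulated tone."""
    t = np.arange(0, dur_s, 1 / fs)
    env = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return env * np.sin(2 * np.pi * carrier_hz * t)

masker = sam_tone(1000.0, 16.0, 0.5)           # SAM masker (toy parameters)
gap = np.zeros(int(0.1 * fs))                  # silent inter-stimulus gap
signal = sam_tone(1000.0, 16.0, 0.25)          # SAM probe signal
pair = np.concatenate([masker, gap, signal])   # masker-signal pair
```

Varying the masker's modulation rate relative to the signal's is what reveals whether the suppression is tuned in the modulation-frequency domain.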