Slow neural oscillations (∼ 1–15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (∼ 3–7 Hz) and alpha-frequencies (∼ 8–12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word–pseudoword continuum: ranging from (1) real words via (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time–frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally “gate” lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. In sum, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition.
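For readers who want a concrete handle on the band-power measures at stake here, the snippet below is a minimal sketch (not the actual MEG pipeline of the paper, which relied on spatial filtering) of how theta- and alpha-band power envelopes can be extracted from a single channel via bandpass filtering and the Hilbert transform; sampling rate, band edges, and the synthetic signal are assumptions for illustration only.

```python
# Minimal sketch: theta (3-7 Hz) and alpha (8-12 Hz) power envelopes
# from a single-channel signal, via bandpass filtering + Hilbert transform.
# Illustration only, not the spatial-filtering pipeline used in the paper.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)  # 4 s of data

# Synthetic "channel": a theta burst plus ongoing alpha and noise
signal = (0.5 * np.sin(2 * np.pi * 5 * t) * (t > 1.5) * (t < 2.5)
          + 1.0 * np.sin(2 * np.pi * 10 * t)
          + 0.3 * np.random.randn(t.size))

def band_power_envelope(x, lo, hi, fs, order=4):
    """Bandpass the signal and return its instantaneous power envelope."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return np.abs(hilbert(filtered)) ** 2

theta_power = band_power_envelope(signal, 3.0, 7.0, fs)
alpha_power = band_power_envelope(signal, 8.0, 12.0, fs)

print("mean theta power:", theta_power.mean())
print("mean alpha power:", alpha_power.mean())
```

The study itself then contrasted such band-limited power across the word–pseudoword continuum and localized the effects with spatial filters; this toy example only illustrates the two co-existing frequency bands.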
References
Strauß A, Kotz SA, Scharinger M, Obleser J. Alpha and theta brain oscillations index dissociable processes in spoken word recognition. Neuroimage. 2014 Apr 18. PMID: 24747736.
Enhanced alpha power compared with a baseline can reflect states of increased cognitive load, for example, when listening to speech in noise. Can knowledge about “when” to listen (temporal expectations) potentially counteract cognitive load and concomitantly reduce alpha? The current magnetoencephalography (MEG) experiment induced cognitive load using an auditory delayed-matching-to-sample task with 2 syllables S1 and S2 presented in speech-shaped noise. Temporal expectation about the occurrence of S1 was manipulated in 3 different cue conditions: “Neutral” (uninformative about foreperiod), “early-cued” (short foreperiod), and “late-cued” (long foreperiod). Alpha power throughout the trial was highest when the cue was uninformative about the onset time of S1 (neutral) and lowest for the late-cued condition. This alpha-reducing effect of late compared with neutral cues was most evident during memory retention in noise and originated primarily in the right insula. Moreover, individual alpha effects during retention accounted best for observed individual performance differences between late-cued and neutral conditions, indicating a tradeoff between allocation of neural resources and the benefits drawn from temporal cues. Overall, the results indicate that temporal expectations can facilitate the encoding of speech in noise, and concomitantly reduce neural markers of cognitive load.
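As a back-of-the-envelope illustration of the baseline-relative alpha measure described above (a conceptual sketch only, not the authors' MEG source analysis; data layout, time windows, and condition labels are assumed), alpha power per trial can be expressed as dB change relative to a pre-stimulus baseline and then compared across the three cue conditions:

```python
# Minimal sketch (assumed data layout): express trial-wise alpha power as
# dB change relative to a pre-stimulus baseline and compare cue conditions.
# Shapes, condition labels, and time windows are assumptions for illustration.
import numpy as np

fs = 250                                         # sampling rate in Hz (assumed)
n_trials, n_times = 60, 5 * fs                   # 60 trials of 5 s alpha-power envelopes
rng = np.random.default_rng(0)

# alpha_power[c] : trials x time, one array per cue condition (simulated here)
alpha_power = {
    "neutral":    rng.gamma(2.0, 1.2, (n_trials, n_times)),
    "early_cued": rng.gamma(2.0, 1.0, (n_trials, n_times)),
    "late_cued":  rng.gamma(2.0, 0.8, (n_trials, n_times)),
}

baseline = slice(0, int(0.5 * fs))               # first 500 ms taken as baseline
retention = slice(int(2.5 * fs), int(4.0 * fs))  # assumed retention window

for condition, power in alpha_power.items():
    base = power[:, baseline].mean(axis=1, keepdims=True)
    power_db = 10 * np.log10(power / base)       # dB change vs. baseline
    retention_db = power_db[:, retention].mean() # average over window and trials
    print(f"{condition:>10s}: mean retention alpha = {retention_db:+.2f} dB")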
References
Wilsch A, Henry MJ, Herrmann B, Maess B, Obleser J. Alpha Oscillatory Dynamics Index Temporal Expectation Benefits in Working Memory. Cereb Cortex. 2014 Jan 31. PMID: 24488943.
The paper is now available online free of charge, and—funnily enough—appeared right on January 1, 2014.
References
Herrmann B, Schlichting N, Obleser J. Dynamic range adaptation to spectral stimulus statistics in human auditory cortex. J Neurosci. 2014 Jan 1;34(1):327–31. PMID: 24381293.
Classically, neural adaptation refers to a reduction in response magnitude by sustained stimulation. In human electroencephalography (EEG), neural adaptation has been measured, for example, as frequen […]
Erb J, Obleser J. Upregulation of cognitive control networks in older adults’ speech comprehension. Front Syst Neurosci. 2013 Dec 24;7:116. PMID: 24399939.
Speech comprehension abilities decline with age and with age-related hearing loss, but it is unclear how this decline expresses in terms of central neural mechanisms. The current study examined neural […]
Watch this space and the PLOS ONE website for a forthcoming article by Molly Henry and me:
Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex
Harking back to what we had argued initially in our 2012 Frontiers op-ed piece (together with Björn Herrmann), Molly presents neat evidence for dissociable cortical signatures of slow amplitude versus frequency modulation. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
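For illustration, here is a minimal sketch of the two stimulus classes in question, a slowly amplitude-modulated and a slowly frequency-modulated tone; carrier frequency, modulation rate, and modulation depth are arbitrary choices for the example, not the stimulus parameters used in the study.

```python
# Minimal sketch: a slowly amplitude-modulated (AM) and a slowly
# frequency-modulated (FM) tone, the two stimulus classes contrasted in the
# paper. Carrier/modulation parameters here are illustrative assumptions.
import numpy as np

fs = 44100                         # audio sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)    # 2 s stimuli

fc = 1000.0                        # carrier frequency (Hz), assumed
fm = 3.0                           # slow modulation rate (Hz), assumed
am_depth = 0.8                     # amplitude-modulation depth, assumed
fm_dev = 200.0                     # peak frequency deviation (Hz), assumed

# AM: the carrier's envelope fluctuates at the modulation rate
am_tone = (1.0 + am_depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# FM: the carrier's instantaneous frequency fluctuates at the same rate
beta = fm_dev / fm                 # modulation index
fm_tone = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

print("AM RMS:", np.sqrt(np.mean(am_tone ** 2)))
print("FM RMS:", np.sqrt(np.mean(fm_tone ** 2)))
```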
Henry MJ, Obleser J. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex. PLoS One. 2013 Oct 29;8(10):e78758. PMID: 24205309.
Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation […]
Thalamic and parietal brain morphology predicts auditory category learning
Categorizing sounds is vital for adaptive human behavior. Accordingly, changing listening situations (external noise, but also peripheral hearing loss in aging) require listeners to flexibly adjust their categorization strategies, e.g., to switch amongst available acoustic cues. However, listeners differ considerably in these adaptive capabilities. For this reason, our study (Neuropsychologia, in press) employed voxel-based morphometry (VBM) to assess the degree to which individual brain morphology predicts such adaptive listening behavior.
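To make the logic concrete, here is a conceptual sketch of the mass-univariate brain–behavior analysis that VBM boils down to: correlating voxel-wise grey-matter estimates with a behavioral score across participants. Data here are simulated; the published analysis naturally involves real whole-brain grey-matter maps and proper multiple-comparison control, which this toy version omits.

```python
# Conceptual sketch of the VBM logic: mass-univariate correlation of
# voxel-wise grey-matter estimates with a behavioral learning score across
# participants. All data simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 40, 10000

grey_matter = rng.normal(0.5, 0.1, (n_subjects, n_voxels))  # GM estimate per voxel
learning_score = rng.normal(0.0, 1.0, n_subjects)           # auditory category learning

# Make a small cluster of voxels genuinely related to behavior (for the demo)
grey_matter[:, :50] += 0.05 * learning_score[:, None]

# Pearson correlation of each voxel with the behavioral score
gm_z = (grey_matter - grey_matter.mean(0)) / grey_matter.std(0)
beh_z = (learning_score - learning_score.mean()) / learning_score.std()
r = gm_z.T @ beh_z / n_subjects

print("strongest brain-behavior correlation:", r.max())
print("voxel index of the peak:", int(r.argmax()))
```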
Oscillatory Phase Dynamics in Neural Entrainment Underpin Illusory Percepts of Time
Natural sounds like speech and music inherently vary in tempo over time. Yet, contextual factors such as variations in the sound’s loudness or pitch can bias the perception of temporal rate change towards slowing down or speeding up.
A new MEG study by Björn Herrmann, Molly Henry, Maren Grigutsch, and Jonas Obleser asked which neural oscillatory dynamics underpin context-induced illusions in temporal rate change. Illusory percepts were linked to changes in the phase patterns of entrained neural oscillations, whereas the exact frequency of the oscillatory response was related to veridical percepts.
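For the oscillation-minded reader, the sketch below computes inter-trial phase coherence (ITC) at an assumed entrainment frequency, the kind of phase-consistency measure at issue here. It is an illustration with simulated trials, not the authors' analysis pipeline, and the sampling rate and stimulation frequency are made up for the example.

```python
# Minimal sketch: inter-trial phase coherence (ITC) at an assumed entrainment
# frequency, computed from simulated single-channel trials with phase jitter.
import numpy as np

fs = 500                                 # sampling rate in Hz (assumed)
f_entrain = 3.1                          # stimulation/entrainment rate in Hz (assumed)
n_trials, n_times = 80, 2 * fs
t = np.arange(n_times) / fs
rng = np.random.default_rng(2)

# Simulated trials: entrained oscillation + noise, with some phase jitter
phase_jitter = rng.normal(0, 0.4, n_trials)
trials = np.array([np.cos(2 * np.pi * f_entrain * t + p) for p in phase_jitter])
trials += 0.5 * rng.standard_normal((n_trials, n_times))

# Phase per trial at f_entrain via a single-frequency Fourier coefficient
kernel = np.exp(-2j * np.pi * f_entrain * t)
coeffs = trials @ kernel                 # complex amplitude per trial
phases = np.angle(coeffs)

# ITC: length of the mean unit phase vector (0 = random, 1 = perfectly locked)
itc = np.abs(np.mean(np.exp(1j * phases)))
print(f"ITC at {f_entrain} Hz: {itc:.2f}")
```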
The paper is in press and forthcoming in The Journal of Neuroscience.
Herrmann B, Henry MJ, Grigutsch M, Obleser J. Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time. J Neurosci. 2013 Oct 2;33(40):15799–809. PMID: 24089487.
Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a […]
When we listen to sounds like speech and music, we have to make sense of different acoustic features that vary simultaneously along multiple time scales. This means that we, as listeners, have to selectively attend to, but at the same time selectively ignore, separate but intertwined features of a stimulus.
Brain regions associated with selective attending to and selective ignoring of temporal stimulus features.
A newly published fMRI study by Molly Henry, Björn Herrmann, and Jonas Obleser found a network of brain regions that responded oppositely to identical stimulus characteristics depending on whether they were relevant or irrelevant, even when both stimulus features involved attention to time and temporal features.
Henry MJ, Herrmann B, Obleser J. Selective Attention to Temporal Features on Nested Time Scales. Cereb Cortex. 2013 Aug 26. PMID: 23978652.
Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined […]