Very excited to announce that former Obleser lab PhD student Lea-Maria Schmitt, with her co-authors *), is now out in the journal Science Advances with her new work fusing artificial neural networks and functional MRI data to probe the timescales of prediction in natural language comprehension:
“Predicting speech from a cortical hierarchy of event-based time scales”
*) Lea-Maria Schmitt, Julia Erb, Sarah Tune, and Jonas Obleser from the Obleser lab / Lübeck side, and our collaborators Anna Rysop and Gesa Hartwigsen from Gesa’s Lise Meitner group at the Max Planck Institute in Leipzig. This research was made possible by the ERC and the DFG.
Congratulations to Obleserlab postdoc Julia Erb for her new paper to appear in eLife, “Temporal selectivity declines in the aging human auditory cortex”.
It’s a trope that older listeners struggle more with comprehending speech (think of Professor Tournesol in the famous Tintin comics!). However, the neurobiology of how and why ageing and speech comprehension difficulties are linked at all has proven much more elusive.
Part of this lack of knowledge is directly rooted in our limited understanding of how the central parts of the hearing brain – auditory cortex, broadly speaking – are organized.
Does auditory cortex of older adults have different tuning properties? That is, do young and older adults differ in the way their auditory subfields represent certain features of sound?
A specific hypothesis, derived from what is known about age-related change in neurobiological and psychological processes in general (the idea of so-called “dedifferentiation”), was that tuning to certain features would “broaden” and thus lose selectivity in older compared to younger listeners.
More mechanistically, we aimed to not only observe so-called “cross-sectional” (i.e., age-group) differences, but to link a listener’s chronological age as closely as possible to changes in cortical tuning.
Amongst older listeners, we observe that temporal-rate selectivity declines with higher age. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex.
This research is generously supported by the ERC Consolidator project AUDADAPT, and data for this study were acquired at the CBBM at University of Lübeck.
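For readers curious how a decline in temporal-rate selectivity can be quantified, here is a minimal sketch in Python; it is not the paper’s actual pipeline. It simulates single-voxel responses to sounds modulated at several temporal rates, fits a Gaussian tuning curve over log rate for each listener, and correlates the fitted tuning width (broader tuning = lower selectivity) with age. The rates, the tuning model, and all data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr

# Simulated example: per-listener auditory-cortex responses to sounds
# modulated at several temporal rates (Hz); ages and responses are made up.
rates = np.array([2, 4, 8, 16, 32, 64], dtype=float)
log_rates = np.log2(rates)
rng = np.random.default_rng(0)
n_listeners = 40
ages = rng.uniform(40, 80, n_listeners)

def gaussian_tuning(log_rate, amp, mu, sigma, baseline):
    """Gaussian tuning curve over log temporal rate."""
    return baseline + amp * np.exp(-0.5 * ((log_rate - mu) / sigma) ** 2)

# Simulate dedifferentiation: tuning width (sigma) broadens with age.
true_sigma = 0.5 + 0.02 * (ages - 40)
responses = np.array([
    gaussian_tuning(log_rates, 1.0, np.log2(8), s, 0.1)
    + rng.normal(0, 0.05, rates.size)
    for s in true_sigma
])

# Fit each listener's tuning curve; the fitted width indexes (lack of) selectivity.
widths = []
for resp in responses:
    popt, _ = curve_fit(gaussian_tuning, log_rates, resp,
                        p0=[1.0, np.log2(8), 1.0, 0.0], maxfev=5000)
    widths.append(abs(popt[2]))

rho, p = spearmanr(ages, widths)
print(f"age vs. tuning width: rho = {rho:.2f}, p = {p:.3g}")
```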
Wöstmann, Alavash and Obleser demonstrate that alpha oscillations in the human brain implement distractor suppression independent of target selection.
In theory, the ability to selectively focus on relevant objects in our environment rests on two processes: the selection of targets and the suppression of distractors. As it is unclear whether target selection and distractor suppression are independent, we designed an electroencephalography (EEG) study to directly contrast these two processes.
Participants performed a pitch discrimination task on a tone sequence presented at one loudspeaker location while a distracting tone sequence was presented at another location. When the distractor was fixed in the front, attention to upcoming targets on the left versus right side induced hemispheric lateralisation of alpha power with relatively higher power ipsi- versus contralateral to the side of attention.
Critically, when the target was fixed in front, suppression of upcoming distractors reversed the pattern of alpha lateralisation, that is, alpha power increased contralateral to the distractor and decreased ipsilaterally. Since the two lateralized alpha responses were uncorrelated across participants, they can be considered largely independent cognitive mechanisms.
This was further supported by the fact that alpha lateralisation in response to distractor suppression originated in more anterior, frontal cortical regions compared with target selection (see figure).
The paper is also available as preprint here.
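To make the alpha lateralisation measure concrete, here is a minimal sketch; it is not the study’s analysis pipeline. It computes single-trial alpha power at one left and one right posterior electrode and forms a lateralisation index that is positive when alpha power is higher ipsilateral to the attended side. Sampling rate, electrode roles, and the simulated data are all assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 250                             # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
n_trials, n_samples = 100, fs * 2    # e.g., a 2-s anticipation interval per trial

# Simulated EEG at two parieto-occipital electrodes (e.g., P7/P8, assumed);
# for attend-left trials we inject stronger alpha on the ipsilateral (left) side.
t = np.arange(n_samples) / fs
eeg_left = rng.normal(0, 1, (n_trials, n_samples)) + 0.5 * np.sin(2 * np.pi * 10 * t)
eeg_right = rng.normal(0, 1, (n_trials, n_samples))

def alpha_power(x, fs, band=(8, 12)):
    """Mean Welch power in the alpha band, per trial."""
    f, pxx = welch(x, fs=fs, nperseg=fs, axis=-1)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[..., mask].mean(axis=-1)

p_left = alpha_power(eeg_left, fs)
p_right = alpha_power(eeg_right, fs)

# Lateralisation index: positive = higher alpha power ipsi- than
# contralateral to the attended (left) side.
ali = (p_left - p_right) / (p_left + p_right)
print(f"mean ALI = {ali.mean():.3f}")
```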
In this three-year project, we will use the auditory modality as a test case to investigate how the suppression of distracting information (i.e., “filtering”) is neurally implemented. While it is known that the attentional sampling of targets (a) is rhythmic, (b) can be entrained, and (c) is modulated by top-down predictions, the existence and neural implementation of these mechanisms for the suppression of distractors is at present unclear. To test this, we will use adaptations of established behavioural paradigms of distractor suppression and recordings of human electrophysiological signals in the magneto-/electroencephalogram (M/EEG).
Wöstmann, Schmitt and Obleser demonstrate that closing the eyes enhances the attentional modulation of neural alpha power but does not affect behavioural performance in two listening tasks.
Does closing the eyes enhance our ability to listen attentively? In fact, many of us tend to close our eyes when listening conditions become challenging, for example on the phone. It is thus surprising that there is no published work on the behavioural or neural consequences of closing the eyes during attentive listening. In the present study, we demonstrate that eye closure not only increases the overall level of absolute alpha power but also the degree to which auditory attention modulates alpha power over time, in synchrony with attending to versus ignoring speech. However, our behavioural results provide evidence for the absence of any difference in listening performance with closed versus open eyes. The likely reason for this is that the impact of eye closure on neural oscillatory dynamics does not match the alpha power modulations associated with listening performance precisely enough (see figure).
The paper is available as preprint here.
How brain areas communicate shapes human communication: The hearing regions in your brain form new alliances as you try to listen at the cocktail party
Obleserlab Postdocs Mohsen Alavash and Sarah Tune rock out an intricate graph-theoretical account of modular reconfigurations in challenging listening situations, and how these predict individuals’ listening success.
Available online now in PNAS! (Also, our uni is currently featuring a German-language press release on it, as well as an English-language version)
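As a rough illustration of the graph-theoretical approach, here is a minimal sketch; the correlation-based connectivity, the percentile threshold, and the community detection method are generic assumptions, not the paper’s methods. It builds a brain graph from simulated regional time series and quantifies modular organisation via the modularity index Q; it is reconfigurations of this kind of modular structure that can then be related to listening success.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(3)
n_regions, n_timepoints = 50, 200
ts = rng.normal(0, 1, (n_regions, n_timepoints))   # simulated regional fMRI signals

corr = np.corrcoef(ts)                   # region-by-region functional connectivity
np.fill_diagonal(corr, 0)
threshold = np.percentile(np.abs(corr), 90)   # keep ~top 10% of edges (assumed)

G = nx.Graph()
G.add_nodes_from(range(n_regions))
for i in range(n_regions):
    for j in range(i + 1, n_regions):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(i, j, weight=abs(corr[i, j]))

# Partition the graph into modules and quantify how modular it is.
communities = greedy_modularity_communities(G, weight="weight")
Q = modularity(G, communities, weight="weight")
print(f"{len(communities)} modules, modularity Q = {Q:.2f}")
```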
Listening requires selective neural processing of the incoming sound mixture, which in humans is borne out by a surprisingly clean representation of attended-only speech in auditory cortex. How this neural selectivity is achieved even at negative signal-to-noise ratios (SNR) remains unclear. We show that, under such conditions, a late cortical representation (i.e., neural tracking) of the ignored acoustic signal is key to successful separation of attended and distracting talkers (i.e., neural selectivity). We recorded and modeled the electroencephalographic response of 18 participants who attended to one of two simultaneously presented stories, while the SNR between the two talkers varied dynamically between +6 and −6 dB. The neural tracking showed an increasing early-to-late attention-biased selectivity. Importantly, acoustically dominant (i.e., louder) ignored talkers were tracked neurally by late involvement of fronto-parietal regions, which contributed to enhanced neural selectivity. This neural selectivity, by way of representing the ignored talker, poses a mechanistic neural account of attention under real-life acoustic conditions.
The paper is available here.
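For a sense of what neural tracking means computationally, here is a minimal stimulus-reconstruction sketch; it is not the paper’s exact model. A ridge-regularised backward decoder maps time-lagged EEG onto the speech envelope, and the reconstruction accuracy (correlation between reconstructed and actual envelope) indexes how strongly a talker is tracked. Sampling rate, lag range, and the regularisation parameter are assumptions, and all signals are simulated.

```python
import numpy as np

fs = 64                        # common EEG/envelope sampling rate (assumed)
lags = np.arange(0, 16)        # decoder lags: 0 to ~235 ms at 64 Hz
rng = np.random.default_rng(2)
n_samples, n_channels = fs * 60, 32

envelope = np.abs(rng.normal(0, 1, n_samples))   # simulated attended-speech envelope
eeg = rng.normal(0, 1, (n_samples, n_channels))
eeg[:, 0] += 0.5 * envelope                      # inject envelope tracking

def lag_matrix(x, lags):
    """Stack time-lagged copies of the EEG as regressors."""
    out = np.zeros((x.shape[0], x.shape[1] * len(lags)))
    for i, lag in enumerate(lags):
        out[lag:, i * x.shape[1]:(i + 1) * x.shape[1]] = x[:x.shape[0] - lag]
    return out

# Train a ridge decoder on the first half of the data, test on the second half.
X = lag_matrix(eeg, lags)
half = n_samples // 2
lam = 1e2   # ridge parameter (would be cross-validated in practice)
w = np.linalg.solve(X[:half].T @ X[:half] + lam * np.eye(X.shape[1]),
                    X[:half].T @ envelope[:half])
recon = X[half:] @ w
r = np.corrcoef(recon, envelope[half:])[0, 1]
print(f"envelope reconstruction accuracy r = {r:.2f}")
```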