In this three-year project, we will use the auditory modality as a test case to investigate how the suppression of distracting information (i.e., “filtering”) is neurally implemented. While it is known that the attentional sampling of targets (a) is rhythmic, (b) can be entrained, and (c) is modulated by top-down predictions, the existence and neural implementation of these mechanisms for the suppression of distractors are at present unclear. To test this, we will use adaptations of established behavioural paradigms of distractor suppression and recordings of human electrophysiological signals in the Magneto-/Electroencephalogram (M/EEG).
Author: Jonas
Wöstmann, Schmitt and Obleser demonstrate that closing the eyes enhances the attentional modulation of neural alpha power but does not affect behavioural performance in two listening tasks
Does closing the eyes enhance our ability to listen attentively? In fact, many of us tend to close our eyes when listening conditions become challenging, for example on the phone. It is thus surprising that there is no published work on the behavioural or neural consequences of closing the eyes during attentive listening. In the present study, we demonstrate that eye closure not only increases the overall level of absolute alpha power but also the degree to which auditory attention modulates alpha power over time, in synchrony with attending to versus ignoring speech. However, our behavioural results provide evidence for the absence of any difference in listening performance with closed versus open eyes. The likely reason is that the impact of eye closure on neural oscillatory dynamics does not match the alpha power modulations associated with listening performance precisely enough (see figure).
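For the methodologically curious: the core measure here, an alpha-power time course, can be computed with standard tools. Below is a minimal, self-contained sketch in Python, assuming a single synthetic EEG channel; the 8–12 Hz band limits, the sampling rate, and the Hilbert-envelope approach are generic illustrations, not the paper's exact pipeline.

```python
# Minimal sketch: extracting an alpha-power (8-12 Hz) time course from one
# EEG channel. Band limits, sampling rate, and the Hilbert-envelope approach
# are illustrative assumptions, not the exact pipeline used in the paper.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                         # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)       # 10 s of synthetic data
eeg = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) \
      + 0.5 * np.random.randn(t.size)   # 10 Hz rhythm with slow modulation

# Band-pass filter to the alpha band (8-12 Hz)
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="bandpass")
alpha = filtfilt(b, a, eeg)

# Instantaneous alpha power via the Hilbert envelope
alpha_power = np.abs(hilbert(alpha)) ** 2

# A simple attentional-modulation index could then compare mean alpha power
# during "ignore" vs. "attend" intervals of the listening task (here, the
# two halves of the synthetic recording stand in for those intervals).
ignore_power = alpha_power[: t.size // 2].mean()
attend_power = alpha_power[t.size // 2 :].mean()
print(f"modulation index: {(ignore_power - attend_power) / (ignore_power + attend_power):+.3f}")
```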
The paper is available as a preprint here.
Deadline March 15, 2019!
Yet another Post-Doc slot to fill: COME WORK WITH US. I would love to hear from you. (English version to come) @obleserlab https://t.co/5Twiec7kLe
— Jonas Obleser (@jonasobleser) February 21, 2019
(Here is the job ad in English.)
Any informal inquiries: Please call or email Jonas.
***
Should you be interested in working with us more generally, supported by other funds, or at a later stage, please get in touch as well.
Happy and enormously honoured to start my tenure as a @JNeuroscience reviewing editor! https://t.co/yMNOht4Py9
— Jonas Obleser (@jonasobleser) January 3, 2019
After three very interesting and instructive years as a handling editor for Neuroimage, I have just accepted an invitation to join my favourite journal, the classic Journal of Neuroscience, as what they call “Reviewing editor” (i.e., handling or action editor). Looking forward to some exciting science on our desks there!
The scientific publishing field is changing fast, and I am particularly happy for the opportunity to help foster a successful, society-run journal like The Journal of Neuroscience over the next three years.
— Jonas
How brain areas communicate shapes human communication: The hearing regions in your brain form new alliances as you try to listen at the cocktail party
Obleserlab Postdocs Mohsen Alavash and Sarah Tune rock out an intricate graph-theoretical account of modular reconfigurations in challenging listening situations, and how these predict individuals’ listening success.
Available online now in PNAS! (Also, our uni is currently featuring a German-language press release on it, as well as an English-language version)
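For readers who want a feel for the graph-theoretical quantities involved: below is a minimal sketch, assuming a toy random graph in place of real fMRI connectivity data, of how one can partition a network into modules and compute its modularity Q with networkx. It illustrates the kind of measure the paper builds on, not the authors' actual pipeline.

```python
# Minimal sketch of the graph-theoretic quantities at stake: partition a
# connectivity graph into modules and compute modularity Q. A toy random
# graph stands in for real brain-connectivity data; this is not the
# authors' actual analysis pipeline.
import networkx as nx
from networkx.algorithms import community

# Toy "connectome": 90 nodes, edge probability 0.1 (assumed, for illustration)
G = nx.erdos_renyi_graph(n=90, p=0.1, seed=1)

# Detect modules (communities) by greedy modularity maximization
modules = community.greedy_modularity_communities(G)

# Modularity Q of the detected partition; reconfiguration across listening
# conditions would show up as changes in Q and in module membership.
Q = community.modularity(G, modules)
print(f"{len(modules)} modules, Q = {Q:.3f}")
```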
I am hiring: Come do a 4‑y Postdoc with us in Lübeck at the @ObleserLab! Modellers and Causal-inference-folks should feel especially targeted. We have ample data to play with (and a few undergrads to teach stats to now and then). https://t.co/jtH9HHR4uP Please RT wi(l)d(e)ly.
— Jonas Obleser (@jonasobleser) November 26, 2018
The official German-language job advert can also be found here.
Get in touch with Jonas or the other lab members if vaguely interested.
In a new comparative fMRI study just published in Cerebral Cortex, AC postdoc Julia Erb and her collaborators in the Formisano (Maastricht University) and Vanduffel labs (KU Leuven) provide us with novel insights into speech evolution. These data by Erb et al. reveal homologies and differences in natural sound-encoding in human and non-human primate cortex.
From the Abstract: “Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.”
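To make the decoding logic concrete: below is a minimal sketch of cross-validated decoding of a sound's temporal-modulation rate from (synthetic) voxel patterns, using scikit-learn. Trial counts, voxel numbers, and the choice of classifier are illustrative assumptions, not the study's actual analysis.

```python
# Minimal sketch of rate decoding from voxel patterns: cross-validated
# classification of a sound's temporal-modulation rate from (here synthetic)
# fMRI responses. Feature dimensions, rates, and the classifier choice are
# illustrative assumptions, not the study's exact analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
rates = np.repeat([3, 30], 40)            # modulation-rate labels (Hz)
voxels = rng.standard_normal((80, 200))   # 80 trials x 200 voxels
voxels[rates == 3, :20] += 0.5            # weak rate information in a subset

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, voxels, rates, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```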
The paper is available here. Congratulations, Julia!
Listening requires selective neural processing of the incoming sound mixture, which in humans is borne out by a surprisingly clean representation of attended-only speech in auditory cortex. How this neural selectivity is achieved even at negative signal-to-noise ratios (SNR) remains unclear. We show that, under such conditions, a late cortical representation (i.e., neural tracking) of the ignored acoustic signal is key to successful separation of attended and distracting talkers (i.e., neural selectivity). We recorded and modeled the electroencephalographic response of 18 participants who attended to one of two simultaneously presented stories, while the SNR between the two talkers varied dynamically between +6 and −6 dB. The neural tracking showed an increasing early-to-late attention-biased selectivity. Importantly, acoustically dominant (i.e., louder) ignored talkers were tracked neurally by late involvement of fronto-parietal regions, which contributed to enhanced neural selectivity. This neural selectivity, by way of representing the ignored talker, poses a mechanistic neural account of attention under real-life acoustic conditions.
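For a concrete sense of what “neural tracking” means operationally: below is a minimal sketch of a temporal response function (TRF) estimated by ridge regression, mapping a (synthetic) speech envelope onto (synthetic) EEG across time lags. The lag range and regularization are illustrative assumptions, not the paper's exact model; in the study itself, such tracking measures were computed separately for the attended and the ignored talker and compared across early and late response lags.

```python
# Minimal sketch of "neural tracking": ridge-regularized regression from a
# talker's speech envelope (with time lags) to the EEG, i.e., a temporal
# response function (TRF). Data are synthetic; lag range and regularization
# are illustrative assumptions, not the paper's exact model.
import numpy as np

fs, n = 100, 6000                       # 60 s at 100 Hz (assumed)
rng = np.random.default_rng(0)
envelope = rng.standard_normal(n)       # stand-in for a speech envelope
true_trf = np.exp(-np.arange(30) / 10)  # synthetic brain-response kernel
eeg = np.convolve(envelope, true_trf)[:n] + rng.standard_normal(n)

# Build a lagged design matrix (0-290 ms of lags at 100 Hz)
lags = np.arange(30)
X = np.column_stack([np.roll(envelope, l) for l in lags])
X[:29] = 0                              # zero out wrap-around samples

# Ridge regression: trf = (X'X + lambda*I)^-1 X'y
lam = 10.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

# Tracking strength: correlation between predicted and observed EEG
pred = X @ trf
r = np.corrcoef(pred, eeg)[0, 1]
print(f"envelope tracking r = {r:.2f}")
```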
The paper is available here.