The full article is available here.
Former Obleser lab PhD student Leo Waschke is now out in eLife with an ingenious demonstration of how both endogenous and exogenously driven changes in the steepness of the brain-electric 1/f power spectrum (in part linked directly to the local excitation:inhibition, E:I, ratio) in neural populations can affect behaviour in complex, multi-sensory environments: “Modality-specific tracking of attention and sensory statistics in the human electrophysiological spectral exponent”.
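The spectral exponent is simply the slope of the power spectrum plotted in log-log coordinates. As a rough illustration of the concept (not the paper’s actual pipeline, which builds on the Voytek lab’s spectral parameterisation tools), it can be estimated with a straight-line fit:

```python
import numpy as np

def spectral_exponent(freqs, psd, fmin=1.0, fmax=40.0):
    """Estimate the 1/f spectral exponent as the slope of a
    straight-line fit to the power spectrum in log-log space.
    A steeper (more negative) slope is taken to index a lower
    excitation:inhibition (E:I) ratio."""
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, _intercept = np.polyfit(np.log10(freqs[mask]),
                                   np.log10(psd[mask]), deg=1)
    return slope

# Sanity check on a synthetic 1/f^2 spectrum: the recovered
# exponent should be -2.
f = np.linspace(1, 100, 500)
print(round(spectral_exponent(f, 1.0 / f**2), 2))  # → -2.0
```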
The results draw heavily on the recent spectral-exponent work by our collaborators at the University of California San Diego in the lab of Bradley Voytek, and have come together in a three-lab collaboration of Lübeck, San Diego, and Leo’s current scientific home, the Douglas Garrett lab at the MPIB.
1/ Our preprint is published in @eLife! I think we kind of bury the lede here (relevant to the “activity silent” neural activity conversation especially) in that we see clear and strong spectral exponent effects for tracking stimulus statistics that are *invisible* in ERPs. https://t.co/G5KddvDZrW pic.twitter.com/YKZjLD9M3c— Brad Voytek (@bradleyvoytek) October 22, 2021
Here’s a brand new PhD training opportunity, @dfg_public-funded, joint project of @ObleserLab at @UniLuebeck Germany, supervised by me, with star collaborator @GesaHartwigsen (@MPI_CBS) — starting next spring. Please be in touch. Please distribute widely. https://t.co/oTUEVVgQSG pic.twitter.com/L4DtFaqRJl
— Jonas Obleser (@jonasobleser) October 19, 2021
Our lab (senior author Sarah Tune) teamed up once again with the Babylab Lübeck, led by Sarah Jessen: Sarah and Sarah co-wrote a great tutorial on how the versatile analysis framework of temporal response functions can be used to analyse brain data obtained in infants. The article has now been accepted for publication in the well-reputed journal Developmental Cognitive Neuroscience:
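In a nutshell, a temporal response function (TRF) is a regularised regression from time-lagged stimulus features (e.g. the speech envelope) onto the neural response. A minimal sketch of that core idea, not the tutorial’s actual code:

```python
import numpy as np

def fit_trf(stimulus, eeg, n_lags, lam=1.0):
    """Minimal temporal response function (TRF) sketch: ridge
    regression from time-lagged copies of a stimulus feature
    onto a single EEG channel."""
    n = len(stimulus)
    # Design matrix: column `lag` holds the stimulus delayed by `lag` samples.
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    # Ridge solution: w = (X'X + lam * I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

# Simulate an EEG channel as a known TRF convolved with the
# stimulus, plus noise; the fit should recover the TRF weights.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
true_trf = np.array([0.0, 1.0, 0.5, 0.0, -0.3])
eeg = np.convolve(stim, true_trf)[:2000] + 0.1 * rng.standard_normal(2000)
w = fit_trf(stim, eeg, n_lags=5)
```

In practice one would use a dedicated toolbox with cross-validated regularisation, but the estimator itself is this simple.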
Very excited to announce that former Obleser lab PhD student Lea-Maria Schmitt with her co-authors *) is now out in the journal Science Advances with her new work, fusing artificial neural networks and functional MRI data, on timescales of prediction in natural language comprehension:
*) Lea-Maria Schmitt, Julia Erb, Sarah Tune, and Jonas Obleser from the Obleser lab / Lübeck side, and our collaborators Anna Rysop and Gesa Hartwigsen from Gesa’s Lise Meitner group at the Max Planck Institute in Leipzig. This research was made possible by the ERC and the DFG.
Our lab is proud and happy that another major stepping stone from our ERC consolidator project (“AUDADAPT”) is now accepted for publication in PLoS Biology! Congratulations to our first author Dr Mohsen Alavash, now a senior researcher in the Obleser lab in his own right.
Whoop. “Dear Dr Alavash,
I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Biology.” — w/ @sarahs_tunes @ObleserLab @PLOSBiology https://t.co/cw8AQpo9UE
— Jonas Obleser (@jonasobleser) September 16, 2021
Congratulations to former Obleser postdoc Jens Kreitewolf (now at McGill University) for his new paper in Cognition, “Familiarity and task context shape the use of acoustic information in voice identity perception”!
Together with our colleagues from London, Nadine Lavan and Carolyn McGettigan, we took a new approach to test the longstanding theoretical claim that listeners differ in their use of acoustic information when perceiving identity from familiar and unfamiliar voices. Unlike previous studies that have related single acoustic features to voice identity perception, we linked listeners’ voice-identity judgments to more complex acoustic representations—that is, the spectral similarity of voice recordings (see Figure below).
This new study has a direct link to pop culture (by capitalizing on naturally varying voice recordings taken from the famous TV show Breaking Bad) and challenges traditional proposals that view familiar and unfamiliar voice perception as being distinct at all times.
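To illustrate the general idea (a hypothetical sketch, not the study’s actual acoustic representation), the spectral similarity of two voice recordings can be quantified by correlating their log-power spectra:

```python
import numpy as np

def spectral_similarity(x, y, n_fft=1024):
    """Illustrative spectral-similarity measure: the Pearson
    correlation between the log-power spectra of two recordings."""
    px = np.log(np.abs(np.fft.rfft(x, n_fft)) ** 2 + 1e-12)
    py = np.log(np.abs(np.fft.rfft(y, n_fft)) ** 2 + 1e-12)
    return np.corrcoef(px, py)[0, 1]

# A recording compared with itself is maximally similar.
rng = np.random.default_rng(0)
voice_a = rng.standard_normal(4000)
print(round(spectral_similarity(voice_a, voice_a), 2))  # → 1.0
```

Such a pairwise similarity score can then be related to listeners’ identity judgments across many recording pairs, rather than to any single acoustic feature.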
Click here to find out more.
Frauke Kraus, Sarah Tune, Anna Ruhe, Jonas Obleser & Malte Wöstmann demonstrate that unilateral acoustic degradation delays attentional separation of competing speech.
Unilateral cochlear implant (CI) users have to integrate acoustically intact speech at one ear with acoustically degraded speech at the other. How do unilateral acoustic degradation and spatial attention interact in a multi-talker situation?
N = 22 participants took part in a competing-listening experiment, attending to an intact audiobook under distraction from an acoustically degraded audiobook, and vice versa. Speech tracking revealed that attentional separation of acoustically degraded speech was not reduced per se, but rather delayed in time compared with intact speech. These findings might explain the listening challenges experienced by unilateral CI users.
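A delay like this could, in principle, be read out as the lag at which the stimulus-response cross-correlation peaks. A toy sketch (assuming a single speech envelope and a single response channel, not the paper’s actual speech-tracking model):

```python
import numpy as np

def tracking_lag_ms(envelope, response, fs):
    """Toy read-out of a tracking delay: the lag (in ms) at which
    the cross-correlation between speech envelope and neural
    response peaks."""
    env = envelope - envelope.mean()
    res = response - response.mean()
    xcorr = np.correlate(res, env, mode="full")
    lag_samples = np.argmax(xcorr) - (len(env) - 1)
    return 1000.0 * lag_samples / fs

# A response lagging the envelope by 10 samples at fs = 100 Hz
# corresponds to a 100 ms tracking delay.
rng = np.random.default_rng(1)
env = rng.standard_normal(500)
res = np.concatenate([np.zeros(10), env])[:500]
print(tracking_lag_ms(env, res, fs=100))  # → 100.0
```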
To learn more, the paper is available here.