
A grant double to celebrate

We are honoured and delighted that the Deutsche Forschungsgemeinschaft has deemed two of our recent applications worthy of funding: the two senior researchers in the lab, Sarah Tune and Malte Wöstmann, have both been awarded three-year grant funding for their new projects. Congratulations!

In her 3-year, 360-K€ project “How perceptual inference changes with age: Behavioural and brain dynamics of speech perception”, Sarah Tune will explore the role of perceptual priors in speech perception in the ageing listener. She will mainly use neural and perceptual modelling and functional neuroimaging.

In his 3-year, 270-K€ project “Investigation of capture and suppression in auditory attention”, Malte Wöstmann will continue and refine his successful research endeavour into dissociating the role of suppressive mechanisms in the listening mind and brain, mainly using EEG and behavioural modelling.

Both of them will soon advertise posts for PhD candidates to join us and work on these exciting projects with Sarah, Malte, and the rest of the Obleserlab team.

 


New paper in Developmental Cognitive Neuroscience, Jessen et al.

Our lab (senior author Sarah Tune) teamed up once again with the Babylab Lübeck, led by Sarah Jessen: Sarah and Sarah co-wrote a great tutorial on how the versatile analysis framework of temporal response functions can be used to analyse brain data obtained in infants. The article has now been accepted for publication in the well-reputed journal Developmental Cognitive Neuroscience.
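For those wondering what a temporal response function is computationally: at its core, a TRF is a regularized linear regression from time-lagged copies of a stimulus feature (for example, the speech envelope) onto the recorded EEG. Below is a minimal numpy sketch, with toy data and all names purely illustrative; dedicated toolboxes add cross-validation, regularization tuning, and proper preprocessing.

```python
import numpy as np

def estimate_trf(stimulus, eeg, lags, ridge=1.0):
    """Estimate a temporal response function (TRF) by ridge regression.

    stimulus : 1-D array (e.g. a speech envelope), one value per EEG sample
    eeg      : 2-D array, shape (n_samples, n_channels)
    lags     : iterable of integer sample lags (stimulus-to-brain delays)
    ridge    : regularization strength (lambda)
    Returns the TRF weights, shape (n_lags, n_channels).
    """
    n = len(stimulus)
    # Build the lagged design matrix: each column is the stimulus
    # shifted by one lag, so the regression weights form the TRF.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge-regularized least squares: (X'X + lambda*I)^-1 X'y
    XtX = X.T @ X + ridge * np.eye(len(lags))
    return np.linalg.solve(XtX, X.T @ eeg)

# Toy example: one "channel" that responds to the envelope at a 5-sample delay
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.zeros((2000, 1))
eeg[5:, 0] = 0.8 * env[:-5] + 0.1 * rng.standard_normal(1995)
trf = estimate_trf(env, eeg, lags=range(10))
print(int(np.argmax(np.abs(trf[:, 0]))))  # → 5, the true lag
```

The recovered weight profile peaks at the simulated delay; in real infant EEG, of course, the same machinery is applied to continuous naturalistic stimulation rather than simulated data.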

 


New paper in Science Advances by Schmitt et al.

Very excited to announce that former Obleserlab PhD student Lea-Maria Schmitt, with her co-authors *), is now out in the journal Science Advances with her new work, fusing artificial neural networks and functional MRI data, on timescales of prediction in natural language comprehension:

“Predicting speech from a cortical hierarchy of event-based time scales”

*) Lea-Maria Schmitt, Julia Erb, Sarah Tune, and Jonas Obleser from the Obleser lab / Lübeck side, and our collaborators Anna Rysop and Gesa Hartwigsen from Gesa’s Lise Meitner group at the Max Planck Institute in Leipzig. This research was made possible by the ERC and the DFG.

 


New paper in Cognition by Lavan, Kreitewolf et al.

Congratulations to former Obleser postdoc Jens Kreitewolf (now at McGill University) for his new paper in Cognition, “Familiarity and task context shape the use of acoustic information in voice identity perception”!

Together with our colleagues from London, Nadine Lavan and Carolyn McGettigan, we took a new approach to test the longstanding theoretical claim that listeners differ in their use of acoustic information when perceiving identity from familiar and unfamiliar voices. Unlike previous studies that have related single acoustic features to voice identity perception, we linked listeners’ voice-identity judgments to more complex acoustic representations, namely the spectral similarity of voice recordings (see Figure below).
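For illustration only, and not the paper’s actual analysis pipeline: one simple way to operationalize the spectral similarity of two recordings is to compare their long-term average spectra, for instance via the cosine similarity of log-magnitude spectra. All names and parameters below are illustrative.

```python
import numpy as np

def long_term_spectrum(signal, n_fft=512):
    """Average magnitude spectrum over consecutive windowed frames."""
    n_frames = len(signal) // n_fft
    frames = signal[:n_frames * n_fft].reshape(n_frames, n_fft)
    return np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)).mean(axis=0)

def spectral_similarity(a, b, n_fft=512):
    """Cosine similarity of the log long-term spectra of two recordings."""
    sa = np.log(long_term_spectrum(a, n_fft) + 1e-12)
    sb = np.log(long_term_spectrum(b, n_fft) + 1e-12)
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb)))

# A recording is maximally similar to itself
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(round(spectral_similarity(tone, tone), 3))  # → 1.0
```

Pairwise similarities of this kind can then be related to listeners’ identity judgments across many voice recordings.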

This new study has a direct link to pop culture (by capitalizing on naturally varying voice recordings taken from the famous TV show Breaking Bad) and challenges traditional proposals that view familiar and unfamiliar voice perception as being distinct at all times.

Click here to find out more.


Jonas as a guest on the Language Neuroscience Podcast

Thanks to colleague Stephen Wilson from Vanderbilt University for inviting me to this conversation!

The episode with Jonas is also available on Spotify.

— A German automatically generated transcript is available here (all translations without guarantee).


New paper in eLife: Erb et al., Temporal selectivity declines in the aging human auditory cortex

Congratulations to Obleserlab postdoc Julia Erb for her new paper to appear in eLife, “Temporal selectivity declines in the aging human auditory cortex”.

It’s a trope that older listeners struggle more in comprehending speech (think of Professor Tournesol in the famous Tintin comics!). The neurobiology of why and how ageing and speech comprehension difficulties are linked at all has proven much more elusive, however.

Part of this lack of knowledge is directly rooted in our limited understanding of how the central parts of the hearing brain – auditory cortex, broadly speaking – are organized.

Does the auditory cortex of older adults have different tuning properties? That is, do young and older adults differ in the way their auditory subfields represent certain features of sound?

A specific hypothesis following from this, derived from what is known about age-related change in neurobiological and psychological processes in general (the idea of so-called “dedifferentiation”), was that tuning to certain features would “broaden”, and thus lose selectivity, in older compared to younger listeners.

More mechanistically, we aimed not only to observe so-called “cross-sectional” (i.e., age-group) differences, but to link a listener’s chronological age as closely as possible to changes in cortical tuning.

Amongst older listeners, we observe that temporal-rate selectivity declines with higher age. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex.

This research is generously supported by the ERC Consolidator project AUDADAPT, and data for this study were acquired at the CBBM at the University of Lübeck.


New paper in press in eLife: Waschke et al.

Obleserlab senior PhD student Leo Waschke, alongside co-authors Sarah Tune and Jonas Obleser, has a new paper in eLife.

The processing of sensory information from our environment is not constant but rather varies with changes in ongoing brain activity, or brain states. Thus, the acuity of perceptual decisions also depends on the brain state during which sensory information is processed. Recent work in non-human animals suggests two key processes that shape brain states relevant for sensory processing and perceptual performance. On the one hand, the momentary level of neural desynchronization in sensory cortical areas has been shown to impact neural representations of sensory input and related performance. On the other hand, the current level of arousal and related noradrenergic activity has been linked to changes in sensory processing and perceptual acuity.

However, it is at present unclear whether local neural desynchronization and arousal constitute distinct brain states that entail varying consequences for sensory processing and behaviour, or whether they represent two interrelated manifestations of ongoing brain activity and jointly affect behaviour. Furthermore, the exact shape of the relationship (e.g., linear vs. quadratic) between perceptual performance and each of these brain state markers has not been established.

In order to transfer findings from animal physiology to human cognitive neuroscience and test the exact shape of unique as well as shared influences of local cortical desynchronization and global arousal on sensory processing and perceptual performance, we recorded electroencephalography and pupillometry in 25 human participants while they performed a challenging auditory discrimination task.

Importantly, auditory stimuli were selectively presented during periods of especially high or low auditory cortical desynchronization, as approximated by an information-theoretic measure of time-series complexity (weighted permutation entropy). By means of a closed-loop real-time setup, we were not only able to present stimuli during different desynchronization states but also made sure to sample the whole distribution of such states, a prerequisite for the accurate assessment of brain–behaviour relationships. The recorded pupillometry data additionally enabled us to draw inferences regarding the current level of arousal, given the established link between noradrenergic activity and pupil size.
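For the technically minded: weighted permutation entropy quantifies the irregularity of a signal from the ordinal patterns of short segments, weighting each pattern by the segment’s variance. Here is a minimal numpy sketch of the measure, for illustration only; the parameters, artifact handling, and real-time implementation used in the study are of course more involved.

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy of a 1-D signal, normalized to [0, 1].

    Each length-`order` window is reduced to its ordinal (rank) pattern;
    patterns are tallied with weights equal to the window's variance, so
    high-amplitude epochs contribute more. Assumes a non-constant signal.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Time-delay embedding: one row per window, one column per lag
    windows = np.stack([x[i * delay:i * delay + n] for i in range(order)], axis=1)
    patterns = np.argsort(windows, axis=1)   # ordinal pattern of each window
    weights = windows.var(axis=1)            # variance-based weights
    # Encode each permutation as a unique integer and accumulate its weight
    keys = (patterns * (order ** np.arange(order))).sum(axis=1)
    totals = {}
    for k, w in zip(keys, weights):
        totals[k] = totals.get(k, 0.0) + w
    p = np.array(list(totals.values()))
    p = p / p.sum()
    # Shannon entropy of the weighted pattern distribution, normalized
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

# Irregular signals score near 1, fully predictable signals near 0
rng = np.random.default_rng(0)
print(weighted_permutation_entropy(rng.standard_normal(5000)) >
      weighted_permutation_entropy(np.arange(5000.0)))  # → True
```

In a closed-loop setting, such a complexity estimate computed over a sliding window of the ongoing EEG can serve as the trigger signal for stimulus presentation.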

 

Single-trial analyses of EEG activity, pupillometry and behaviour revealed clearly dissociable influences of both brain state markers on ongoing brain activity, early sound-related activity and behaviour. High desynchronization states were characterized by a pronounced reduction in oscillatory power across a wide frequency range, while high arousal states coincided with a decrease in oscillatory power that was limited to high frequencies. Similarly, early sound-evoked activity was differentially impacted by auditory cortical desynchronization and pupil-linked arousal. Phase-locked responses and evoked gamma power increased with local desynchronization, with a tendency to saturate at intermediate levels. Post-stimulus low-frequency power, on the other hand, increased with pupil-linked arousal.

Most importantly, local desynchronization and pupil-linked arousal displayed different relationships with perceptual performance. While participants responded fastest and with the least bias following intermediate levels of auditory cortical desynchronization, intermediate levels of pupil-linked arousal were associated with the highest sensitivity. Thus, although both processes constitute behaviourally relevant brain states that affect perceptual performance following an inverted U, they impact distinct subdomains of behaviour. Taken together, our results speak to a model in which independent states of local desynchronization and global arousal jointly shape states for optimal sensory processing and perceptual performance. The published manuscript, including all supplemental information, can be found here.


New paper in NeuroImage by Fiedler et al.: Tracking ignored speech matters

Listening requires selective neural processing of the incoming sound mixture, which in humans is borne out by a surprisingly clean representation of attended-only speech in auditory cortex. How this neural selectivity is achieved even at negative signal-to-noise ratios (SNR) remains unclear. We show that, under such conditions, a late cortical representation (i.e., neural tracking) of the ignored acoustic signal is key to successful separation of attended and distracting talkers (i.e., neural selectivity). We recorded and modeled the electroencephalographic response of 18 participants who attended to one of two simultaneously presented stories, while the SNR between the two talkers varied dynamically between +6 and −6 dB. The neural tracking showed an increasing early-to-late attention-biased selectivity. Importantly, acoustically dominant (i.e., louder) ignored talkers were tracked neurally by late involvement of fronto-parietal regions, which contributed to enhanced neural selectivity. This neural selectivity, by way of representing the ignored talker, poses a mechanistic neural account of attention under real-life acoustic conditions.

The paper is available here.