Categories
Ageing Attention Auditory Neuroscience Auditory Perception Auditory Speech Processing EEG / MEG Executive Functions fMRI Grants Hearing Loss Linguistics Neural dynamics Perception Semantics Uncategorized

A grant double to celebrate

We are honoured and delighted that the Deutsche Forschungsgemeinschaft has deemed two of our recent applications worthy of funding: the two senior researchers in the lab, Sarah Tune and Malte Wöstmann, have both been awarded three-year grant funding for their new projects. Congratulations!

In her 3-year, 360-K€ project “How perceptual inference changes with age: Behavioural and brain dynamics of speech perception”, Sarah Tune will explore the role of perceptual priors in speech perception in the ageing listener. She will mainly use neural and perceptual modelling and functional neuroimaging.

In his 3-year, 270-K€ project “Investigation of capture and suppression in auditory attention”, Malte Wöstmann will continue and refine his successful research endeavour into dissociating the role of suppressive mechanisms in the listening mind and brain, mainly using EEG and behavioural modelling.

Both of them will accordingly soon advertise posts for PhD candidates to join us and to work on these exciting projects with Sarah, Malte, and the rest of the Obleserlab team.

 

Categories
Auditory Perception Auditory Speech Processing Speech

Hot off the press: New chapter on neural oscillations in speech perception by Tune & Obleser

Neural oscillations are a prominent feature of the brain’s electrophysiology and target variables in many speech perception studies. For the latest edition of the Springer Handbook of Auditory Research – this time focused on speech perception – lab members Sarah Tune and Jonas Obleser teamed up to take stock of what has been learned about the functional relationship of neural oscillations and speech perception.

By focusing on core functions and computational principles, the chapter offers a parsimonious account of the stable patterns that have emerged across studies and levels of investigation.

You can find a preprint of the chapter here and the entire collection of chapters here.

Categories
Auditory Neuroscience Auditory Speech Processing fMRI Linguistics Papers Perception Psychology Semantics Speech Uncategorized

New paper in Science Advances by Schmitt et al.

Very excited to announce that former Obleser lab PhD student Lea-Maria Schmitt, with her co-authors*), is now out in the journal Science Advances with her new work, fusing artificial neural networks and functional MRI data, on timescales of prediction in natural language comprehension:

“Predicting speech from a cortical hierarchy of event-based time scales”

*) Lea-Maria Schmitt, Julia Erb, Sarah Tune, and Jonas Obleser from the Obleser lab / Lübeck side, and our collaborators Anna Rysop and Gesa Hartwigsen from Gesa’s Lise Meitner group at the Max Planck Institute in Leipzig. This research was made possible by the ERC and the DFG.

 

Categories
Adaptive Control Auditory Neuroscience Auditory Speech Processing Auf deutsch Events Executive Functions Hearing Loss Media Speech

Jonas presented for the KIND Hörstiftung in Berlin (Video)

In February, I had the honour of presenting our work on predicting listening success for the Kind Hörstiftung at their 2019 symposium in Berlin, illustrated by several of our studies and pitched for a general audience. A 25-minute video of this talk (in German) is now online.

Categories
Attention Auditory Cortex Auditory Speech Processing EEG / MEG Psychology Speech

AC postdoc Malte Wöstmann scores DFG grant to study the temporal dynamics of the auditory attentional filter

In this three-year project, we will use the auditory modality as a test case to investigate how the suppression of distracting information (i.e., “filtering”) is neurally implemented. While it is known that the attentional sampling of targets (a) is rhythmic, (b) can be entrained, and (c) is modulated by top-down predictions, the existence and neural implementation of these mechanisms for the suppression of distractors is at present unclear. To test this, we will use adaptations of established behavioural paradigms of distractor suppression and recordings of human electrophysiological signals in the magneto-/electroencephalogram (M/EEG).

Abstract of research project:

Background: Goal-directed behaviour in temporally dynamic environments requires focusing on relevant information and not getting distracted by irrelevant information. To achieve this, two cognitive processes are necessary: on the one hand, attentional sampling of target stimuli, which has been the focus of extensive research. On the other hand, it is less well known how the human neural system exploits temporal information in the stimulus to filter out distraction. In the present project, we use the auditory modality as a test case to study the temporal dynamics of attentional filtering and its neural implementation.

Approach and general hypothesis: In three variants of the “Irrelevant-Sound Task”, we will manipulate temporal aspects of auditory distractors. Behavioural recall of target stimuli despite distraction and responses in the electroencephalogram (EEG) will reflect the integrity and neural implementation of the attentional filter. In line with preliminary research, our general hypothesis is that attentional filtering is based on similar but sign-reversed mechanisms to those of attentional sampling: for instance, while attention to rhythmic stimuli increases neural sensitivity at time points of expected target occurrence, filtering of distractors should instead decrease neural sensitivity at the time of expected distraction.

Work programme: In each of three Work Packages (WPs), we will take as a model an established neural mechanism of attentional sampling and test the existence and neural implementation of a similar mechanism for attentional filtering. This way, we will investigate whether attentional filtering follows an intrinsic rhythm (WP1), whether rhythmic distractors can entrain attentional filtering (WP2), and whether foreknowledge about the time of distraction induces top-down tuning of the attentional filter in frontal cortex regions (WP3).

Objectives and relevance: The primary objective of this research is to contribute to the foundational science on human selective attention, which requires a comprehensive understanding of how the neural system achieves the task of filtering out distraction. Furthermore, hearing difficulties often stem from distraction by salient but irrelevant sound. Results of this research will translate to the development of hearing aids that take neuro-cognitive mechanisms into account to filter out distraction more efficiently.

Categories
Attention Auditory Cortex Auditory Speech Processing Papers Psychology Publications Speech

New paper in press in the Journal of Cognitive Neuroscience

Wöstmann, Schmitt and Obleser demonstrate that closing the eyes enhances the attentional modulation of neural alpha power but does not affect behavioural performance in two listening tasks

Does closing the eyes enhance our ability to listen attentively? In fact, many of us tend to close our eyes when listening conditions become challenging, for example on the phone. It is thus surprising that there is no published work on the behavioural or neural consequences of closing the eyes during attentive listening. In the present study, we demonstrate that eye closure not only increases the overall level of absolute alpha power but also the degree to which auditory attention modulates alpha power over time, in synchrony with attending to versus ignoring speech. However, our behavioural results provide evidence for the absence of any difference in listening performance with closed versus open eyes. The likely reason for this is that the impact of eye closure on neural oscillatory dynamics does not match the alpha power modulations associated with listening performance precisely enough (see figure).

The paper is available as a preprint here.

 

Categories
Adaptive Control Ageing Attention Auditory Cortex Auditory Neuroscience Auditory Speech Processing Executive Functions fMRI Papers Psychology Uncategorized

New paper in PNAS by Alavash, Tune, Obleser

How brain areas communicate shapes human communication: The hearing regions in your brain form new alliances as you try to listen at the cocktail party

Obleserlab postdocs Mohsen Alavash and Sarah Tune rock out an intricate graph-theoretical account of modular reconfigurations in challenging listening situations, and how these predict individuals’ listening success.

Available online now in PNAS! (Also, our uni is currently featuring a German-language press release on it, as well as an English-language version.)

Categories
Auditory Cortex Auditory Perception Auditory Speech Processing Hearing Loss Papers Perception Publications Speech

New paper in Ear and Hearing: Erb, Ludwig, Kunke, Fuchs & Obleser on speech comprehension with a cochlear implant

We are excited to share the results from our collaboration with the Cochlea Implant Center Leipzig: AC postdoc Julia Erb’s new paper on how 4-Hz modulation sensitivity can inform us about 6-month speech comprehension outcomes in cochlear implant users.

Erb J, Ludwig AA, Kunke D, Fuchs M, & Obleser J (2018). Temporal sensitivity measured shortly after cochlear implantation predicts six-month speech recognition outcome.

Now avail­able online:

https://insights.ovid.com/crossref?an=00003446-900000000-98942

Abstract:

Objectives:

Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients.

Design:

In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6).

Results:

Both AMRD thresholds at t0 (r = –0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome from deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor.
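The model comparison reported above can be sketched in a few lines: fit two ordinary-least-squares models, with and without the extra predictor, and compare their adjusted R². This is a minimal illustration on synthetic data, not the study's analysis – all variable names, effect sizes, and the noise level below are invented for the example:

```python
import numpy as np

def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R^2 for an OLS fit with the given number of predictors."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    n = len(y)
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

def ols_fit(X, y):
    """Fit OLS with an intercept column; return fitted values."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

rng = np.random.default_rng(0)
n = 34  # sample size as in the study; everything else is synthetic

# Hypothetical predictors: AMRD is generated independently of speech_t0,
# mirroring the reported lack of correlation between the two measures.
deafness_dur = rng.normal(size=n)
speech_t0 = rng.normal(size=n)
amrd = rng.normal(size=n)
speech_t6 = (0.5 * speech_t0 - 0.4 * amrd + 0.2 * deafness_dur
             + rng.normal(scale=0.8, size=n))

X_base = np.column_stack([deafness_dur, speech_t0])       # without AMRD
X_full = np.column_stack([deafness_dur, speech_t0, amrd])  # with AMRD

adj_base = adjusted_r2(speech_t6, ols_fit(X_base, speech_t6), 2)
adj_full = adjusted_r2(speech_t6, ols_fit(X_full, speech_t6), 3)
print(f"adjusted R^2 without AMRD: {adj_base:.2f}, with AMRD: {adj_full:.2f}")
```

Because AMRD genuinely contributes variance in this simulation, the adjusted R² rises when it is added – the same pattern, in kind if not in the exact numbers, as the 0.30 → 0.44 improvement reported in the paper.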

Conclusions:

These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.