Categories
Uncategorized

New lab member Sebastian Puschmann

We welcome Dr. Sebastian Puschmann as a new postdoc in the Obleser/Auditory Cognition lab!

Sebastian has a background in Physics. He received his training in auditory cognitive neuroscience at the University of Oldenburg and the Montreal Neurological Institute. In Lübeck, Sebastian will push forward studies on the neural mechanisms of, and neural changes associated with, hearing loss.

Categories
Attention Auditory Cortex Auditory Speech Processing EEG / MEG Psychology Speech

AC postdoc Malte Wöstmann scores DFG grant to study the temporal dynamics of the auditory attentional filter

In this three-year project, we will use the auditory modality as a test case to investigate how the suppression of distracting information (i.e., “filtering”) is neurally implemented. While it is known that the attentional sampling of targets (a) is rhythmic, (b) can be entrained, and (c) is modulated by top-down predictions, the existence and neural implementation of these mechanisms for the suppression of distractors is at present unclear. To test this, we will use adaptations of established behavioural paradigms of distractor suppression and recordings of human electrophysiological signals in the magneto-/electroencephalogram (M/EEG).

Abstract of the research project:

Background: Goal-directed behaviour in temporally dynamic environments requires focusing on relevant information while not getting distracted by irrelevant information. To achieve this, two cognitive processes are necessary: On the one hand, attentional sampling of target stimuli has been the focus of extensive research. On the other hand, it is less well known how the human neural system exploits temporal information in the stimulus to filter out distraction. In the present project, we use the auditory modality as a test case to study the temporal dynamics of attentional filtering and its neural implementation.

Approach and general hypothesis: In three variants of the “Irrelevant-Sound Task” we will manipulate temporal aspects of auditory distractors. Behavioural recall of target stimuli despite distraction and responses in the electroencephalogram (EEG) will reflect the integrity and neural implementation of the attentional filter. In line with preliminary research, our general hypothesis is that attentional filtering is based on mechanisms similar to those of attentional sampling, but with reversed sign: for instance, while attention to rhythmic stimuli increases neural sensitivity at time points of expected target occurrence, filtering of distractors should instead decrease neural sensitivity at the time of expected distraction.
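For illustration only, here is a minimal Python sketch of the hypothesized sign reversal: attentional sampling boosts sensitivity at expected target times, whereas attentional filtering should suppress sensitivity at expected distractor times. All parameter values are arbitrary and purely illustrative; this is not part of the project materials.

```python
# Minimal sketch (not project code): the hypothesized sign-reversed
# relationship between attentional sampling and attentional filtering.
import numpy as np

fs = 100                        # samples per second
t = np.arange(0, 3, 1 / fs)     # 3 s of "trial" time
f_rhythm = 1.0                  # stimulus rhythm (Hz); events expected at its peaks

baseline = 1.0                  # arbitrary baseline neural sensitivity
depth = 0.3                     # arbitrary modulation depth

# Attentional sampling: sensitivity is highest at expected target times.
sampling_gain = baseline + depth * np.cos(2 * np.pi * f_rhythm * t)

# Attentional filtering (hypothesis): sensitivity is lowest at expected
# distractor times, i.e. the same modulation with reversed sign.
filtering_gain = baseline - depth * np.cos(2 * np.pi * f_rhythm * t)

expected_times = np.isclose(t % (1 / f_rhythm), 0)   # rhythm peaks
print("mean gain at expected events (sampling): %.2f" % sampling_gain[expected_times].mean())
print("mean gain at expected events (filtering): %.2f" % filtering_gain[expected_times].mean())
```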

Work programme: In each of three Work Packages (WPs), we will take as a model an established neural mechanism of attentional sampling and test the existence and neural implementation of a similar mechanism for attentional filtering. This way, we will investigate whether attentional filtering follows an intrinsic rhythm (WP1), whether rhythmic distractors can entrain attentional filtering (WP2), and whether foreknowledge about the time of distraction induces top-down tuning of the attentional filter in frontal cortex regions (WP3).

Objectives and relevance: The primary objective of this research is to contribute to the foundational science on human selective attention, which requires a comprehensive understanding of how the neural system achieves the task of filtering out distraction. Furthermore, hearing difficulties often stem from distraction by salient but irrelevant sound. Results of this research will translate to the development of hearing aids that take neuro-cognitive mechanisms into account to filter out distraction more efficiently.

Categories
Attention Auditory Cortex Auditory Speech Processing Papers Psychology Publications Speech

New paper in press in the Journal of Cognitive Neuroscience

Wöstmann, Schmitt and Obleser demonstrate that closing the eyes enhances the attentional modulation of neural alpha power but does not affect behavioural performance in two listening tasks

Does closing the eyes enhance our ability to listen attentively? In fact, many of us tend to close our eyes when listening conditions become challenging, for example on the phone. It is thus surprising that there is no published work on the behavioural or neural consequences of closing the eyes during attentive listening. In the present study, we demonstrate that eye closure increases not only the overall level of absolute alpha power but also the degree to which auditory attention modulates alpha power over time, in synchrony with attending to versus ignoring speech. However, our behavioural results provide evidence for the absence of any difference in listening performance with closed versus open eyes. The likely reason for this is that the impact of eye closure on neural oscillatory dynamics does not match the alpha power modulations associated with listening performance precisely enough (see figure).
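For the curious, here is a minimal Python sketch of how alpha power can in principle be quantified from a single EEG channel, via band-pass filtering and the Hilbert envelope. The sampling rate, filter settings, and the simulated signal are assumptions; this is not our actual analysis pipeline.

```python
# Minimal sketch (assumptions, not the authors' pipeline): estimating
# alpha power (8-12 Hz) from a single EEG channel via band-pass
# filtering and the Hilbert envelope, here on simulated data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Simulated single-channel EEG: noise plus a 10-Hz component
eeg = rng.standard_normal(t.size) + 0.8 * np.sin(2 * np.pi * 10 * t)

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Band-pass the signal and return its mean squared Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    envelope = np.abs(hilbert(filtered))
    return np.mean(envelope ** 2)

print("alpha power (a.u.): %.3f" % alpha_power(eeg, fs))
```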

The paper is available as a preprint here.

Categories
Auditory Cortex Auditory Neuroscience fMRI Papers Publications

New paper by Erb et al. in Cerebral Cortex: Human but not monkey auditory cortex is tuned to slow temporal rates

In a new comparative fMRI study just published in Cerebral Cortex, AC postdoc Julia Erb and her collaborators in the Formisano (Maastricht University) and Vanduffel (KU Leuven) labs provide us with novel insights into speech evolution. These data by Erb et al. reveal homologies and differences in natural-sound encoding in human and non-human primate cortex.

From the Abstract: “Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.”
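As a rough illustration of what “temporal modulation rates” refer to, here is a minimal Python sketch that computes the temporal modulation spectrum of a simulated sound, i.e., the frequency content of its amplitude envelope. The stimulus and all parameters are made up, and this is not the encoding model used in the paper.

```python
# Minimal sketch (not the authors' model): temporal modulation spectrum
# of a sound, obtained from the frequency content of its envelope.
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)

# Carrier noise amplitude-modulated at a slow, speech-like 3-Hz rate
sound = (1 + 0.9 * np.sin(2 * np.pi * 3 * t)) * rng.standard_normal(t.size)

envelope = np.abs(hilbert(sound))                    # temporal envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

slow = spectrum[(freqs >= 2) & (freqs <= 4)].mean()
fast = spectrum[(freqs >= 30) & (freqs <= 60)].mean()
print("modulation energy near 3 Hz vs. 30-60 Hz: %.1f vs. %.1f" % (slow, fast))
```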

The paper is available here. Congratulations, Julia!

Categories
Attention Auditory Cortex Auditory Neuroscience EEG / MEG Papers Perception Psychology Publications

New paper in NeuroImage by Fiedler et al.: Tracking ignored speech matters

Listening requires selective neural processing of the incoming sound mixture, which in humans is borne out by a surprisingly clean representation of attended-only speech in auditory cortex. How this neural selectivity is achieved even at negative signal-to-noise ratios (SNR) remains unclear. We show that, under such conditions, a late cortical representation (i.e., neural tracking) of the ignored acoustic signal is key to successful separation of attended and distracting talkers (i.e., neural selectivity). We recorded and modeled the electroencephalographic response of 18 participants who attended to one of two simultaneously presented stories, while the SNR between the two talkers varied dynamically between +6 and −6 dB. The neural tracking showed an increasing early-to-late attention-biased selectivity. Importantly, acoustically dominant (i.e., louder) ignored talkers were tracked neurally by late involvement of fronto-parietal regions, which contributed to enhanced neural selectivity. This neural selectivity, by way of representing the ignored talker, offers a mechanistic neural account of attention under real-life acoustic conditions.
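For readers who want a feel for what neural tracking means in practice, below is a minimal Python sketch of estimating a temporal response function (TRF) that maps a speech envelope onto EEG via time-lagged ridge regression, run on simulated data. The sampling rate, lag range, and regularisation are assumptions; this is not the published forward-model pipeline.

```python
# Minimal sketch (assumptions, not the published analysis): estimating a
# temporal response function (TRF) from a speech envelope to EEG via
# time-lagged ridge regression, on simulated data.
import numpy as np

fs = 64                                   # assumed (downsampled) rate, Hz
rng = np.random.default_rng(2)
n = fs * 60                               # one minute of data
envelope = np.abs(rng.standard_normal(n)) # stand-in speech envelope

lags = np.arange(0, int(0.4 * fs))        # 0-400 ms lags
true_trf = np.exp(-lags / 8.0) * np.sin(lags / 3.0)   # arbitrary "true" response

# Lagged design matrix (circular shift; edge effects ignored for brevity)
X = np.column_stack([np.roll(envelope, lag) for lag in lags])
eeg = X @ true_trf + rng.standard_normal(n)           # simulated EEG channel

lam = 1.0                                 # ridge regularisation
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

corr = np.corrcoef(true_trf, trf_hat)[0, 1]
print("correlation between true and estimated TRF: %.2f" % corr)
```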

The paper is available here.

Categories
Auditory Cortex EEG / MEG Papers Perception Publications

New paper in press in the European Journal of Neuroscience: Wöstmann et al. demonstrate that the power of prestimulus alpha oscillations directly relates to confidence in pitch discrimination

What is the mechanistic relevance of neural alpha oscillations (~10 Hz) for perception? To answer this question, we analysed EEG data from a task that required participants to compare the pitch of two tones that were, unbeknownst to participants, identical. Importantly, this task entirely removed potential confounds of varying evidence in the stimulus or varying accuracy. We found that higher prestimulus alpha power correlated with lower confidence in pitch discrimination. These results demonstrate that the relationship between prestimulus alpha power and decision confidence is direct in nature, and that it shows up in the auditory modality similar to what has been shown before in vision and somatosensation. Our findings support the view that lower prestimulus alpha power enhances neural baseline excitability.
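A minimal, purely illustrative Python sketch of the logic of such an analysis: compute prestimulus alpha power per trial and correlate it with single-trial confidence. Data are simulated and all parameters are assumptions; this is not the analysis reported in the paper.

```python
# Minimal sketch (simulated, not the published analysis): trial-wise
# prestimulus alpha power related to single-trial confidence ratings.
import numpy as np
from scipy.stats import spearmanr

fs, n_trials = 250, 200
rng = np.random.default_rng(3)
t = np.arange(0, 1, 1 / fs)               # 1-s prestimulus window per trial

alpha_amp = rng.uniform(0.2, 1.5, n_trials)   # trial-varying 10-Hz amplitude
trials = (alpha_amp[:, None] * np.sin(2 * np.pi * 10 * t)
          + rng.standard_normal((n_trials, t.size)))

# Alpha power per trial: spectral energy in the 8-12 Hz band
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(trials, axis=1)) ** 2
alpha_power = power[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1)

# Simulated confidence: lower when prestimulus alpha power is higher
alpha_z = (alpha_power - alpha_power.mean()) / alpha_power.std()
confidence = -alpha_z + rng.standard_normal(n_trials)

rho, p = spearmanr(alpha_power, confidence)
print("Spearman rho = %.2f, p = %.3f" % (rho, p))
```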

The paper is available as a preprint here.

Categories
Attention Auditory Cortex Brain stimulation Papers Perception Publications

New paper in press in JASA: Kreitewolf et al. on the role of voice-feature continuity for cocktail-party listening

Obleserlab postdoc Jens Kreitewolf is in press in The Journal of the Acoustical Society of America!

Together with our colleagues Marc Schönwiesner (Montreal/Leipzig), Samuel Mathias (Yale), and Régis Trapeau (Montreal/Marseille), we investigated the roles of two of the most salient voice features, glottal-pulse rate (GPR) and vocal-tract length (VTL), for perceptual grouping at the cocktail party. Using carefully controlled stimuli, we show that listeners exploit continuity in both voice features to solve the cocktail-party problem, but that VTL continuity plays a stronger role for perceptual grouping than GPR continuity. Our findings are in line with the differential importance of VTL and GPR for the identification of natural talkers and have clinically relevant implications for cocktail-party listening in cochlear-implant users.

Data were recorded using the Dome at BRAMS during Jens’ ACN Erasmus Mundus exchange in Montreal.

The paper is available as a preprint:

https://www.biorxiv.org/content/early/2018/07/30/379545

Categories
Auditory Cortex Auditory Perception Auditory Speech Processing Hearing Loss Papers Perception Publications Speech

New paper in Ear and Hearing: Erb, Ludwig, Kunke, Fuchs & Obleser on speech comprehension with a cochlear implant

We are excited to share the results from our collaboration with the Cochlea Implant Center Leipzig: AC postdoc Julia Erb’s new paper on how 4-Hz modulation sensitivity can inform us about the 6-month speech comprehension outcome in cochlear-implant users.

Erb J, Ludwig AA, Kunke D, Fuchs M, & Obleser J (2018). Temporal sensitivity measured shortly after cochlear implantation predicts six-month speech recognition outcome.

Now available online:

https://insights.ovid.com/crossref?an=00003446-900000000-98942

Abstract:

Objectives:

Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients.

Design:

In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6).
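For illustration, here is a minimal Python sketch of how amplitude-modulated broadband noise for such a rate-discrimination trial could be generated (a 4-Hz standard and a faster comparison). The duration, modulation depth, and rate step are arbitrary assumptions, and the actual adaptive procedure is not implemented here.

```python
# Minimal sketch (illustrative only, not the clinical test): generating
# two amplitude-modulated broadband noises for a rate-discrimination
# trial, one at the 4-Hz standard rate and one at a slightly faster rate.
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)

def am_noise(rate_hz, depth=1.0):
    """Broadband noise with sinusoidal amplitude modulation at rate_hz."""
    carrier = rng.standard_normal(t.size)
    modulator = 1 + depth * np.sin(2 * np.pi * rate_hz * t)
    stim = modulator * carrier
    return stim / np.max(np.abs(stim))    # normalise to +/- 1

standard = am_noise(4.0)        # standard: ~4 Hz, the speech-relevant rate
step = 1.0                      # an adaptive procedure would shrink this step
comparison = am_noise(4.0 + step)
print(standard.shape, comparison.shape)
```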

Results:

Both AMRD thresholds at t0 (r = −0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome from deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor.
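To make the model comparison concrete, here is a minimal Python sketch (on simulated data, not the study’s dataset) that contrasts a baseline regression model of the 6-month outcome with one that additionally includes the AMRD threshold, using adjusted R².

```python
# Minimal sketch (simulated data, not the study's dataset): comparing a
# baseline regression model of 6-month speech recognition against one
# that adds the AMRD threshold as a predictor, via adjusted R-squared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 34                                     # sample size as in the study
deafness_dur = rng.uniform(0, 30, n)       # years, simulated
speech_t0 = rng.uniform(0, 100, n)         # % correct at t0, simulated
amrd_t0 = rng.uniform(0.5, 4.0, n)         # AMRD threshold, simulated

# Simulated outcome loosely following the reported direction of effects
speech_t6 = 0.4 * speech_t0 - 0.5 * deafness_dur - 8 * amrd_t0 + rng.normal(0, 15, n)

X_base = sm.add_constant(np.column_stack([deafness_dur, speech_t0]))
X_full = sm.add_constant(np.column_stack([deafness_dur, speech_t0, amrd_t0]))

fit_base = sm.OLS(speech_t6, X_base).fit()
fit_full = sm.OLS(speech_t6, X_full).fit()
print("adjusted R^2 without AMRD: %.2f" % fit_base.rsquared_adj)
print("adjusted R^2 with AMRD:    %.2f" % fit_full.rsquared_adj)
```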

Conclusions:

These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.