
New paper in press: Hartwigsen, Golombek, & Obleser in Cortex [UPDATED]

In a collaboration with the University Clinic of Leipzig and Prof Dr Gesa Hartwigsen (now University of Kiel), a new paper is to appear in "Cortex", in the forthcoming special issue on Prediction in Speech and Language, edited by Alessandro Tavano and AC alumnus Mathias Scharinger.

Repetitive transcranial magnetic stimulation over left angular gyrus modulates the predictability gain in degraded speech comprehension

Hartwigsen G, Golombek T, & Obleser J.

See abstract
Increased neural activity in left angular gyrus (AG) accompanies successful comprehension of acoustically degraded but highly predictable sentences, as previous functional imaging studies have shown. However, it remains unclear whether the left AG is causally relevant for the comprehension of degraded speech. Here, we applied transient virtual lesions to either the left AG or superior parietal lobe (SPL, as a control area) with repetitive transcranial magnetic stimulation (rTMS) while healthy volunteers listened to and repeated sentences with high- vs. low-predictable endings and different noise vocoding levels. We expected that rTMS of AG should selectively modulate the predictability gain (i.e., the comprehension benefit from sentences with high-predictable endings) at a medium degradation level. We found that rTMS of AG indeed reduced the predictability gain at a medium degradation level of 4-band noise vocoding (relative to control rTMS of SPL). In contrast, the behavioral perturbation induced by rTMS reversed with increased signal quality. Hence, at 8-band noise vocoding, rTMS over AG vs. SPL increased the overall predictability gain. Together, these results show that the degree of the rTMS interference depended jointly on signal quality and predictability. Our results provide the first causal evidence that the left AG is a critical node for facilitating speech comprehension in challenging listening conditions.
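For readers unfamiliar with the degradation method, noise vocoding splits speech into a few frequency bands and replaces the fine structure in each band with noise, keeping only the bands' amplitude envelopes; fewer bands means harder listening. Below is a minimal illustrative sketch of an n-band noise vocoder using SciPy; it is a generic textbook-style implementation, not the stimulus code used in the study, and the band edges and filter orders are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Crude n-band noise vocoder: split the signal into logarithmically
    spaced bands, extract each band's amplitude envelope, and use it to
    modulate band-limited noise. Illustrative sketch only."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))       # amplitude envelope of this band
        carrier = sosfilt(sos, noise)     # band-limited noise carrier
        out += env * carrier
    # rescale to match the input's RMS level
    out *= np.sqrt(np.mean(signal**2) / np.mean(out**2))
    return out
```

With n_bands=4 the output is intelligible mainly through its temporal envelope, which is what makes semantic predictability so valuable at that degradation level.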


Check it out soon!

References

  • Hartwigsen G, Golombek T, & Obleser J. Repetitive transcranial magnetic stimulation over left angular gyrus modulates the predictability gain in degraded speech comprehension. Cortex. 2014 Sep 18. PMID: 25444577.

New paper in press: Herrmann et al. in NeuroImage

Dr Björn Herrmann did it again and is in press at NeuroImage with Herrmann, Henry, Scharinger, & Obleser on

Supplementary motor area activations predict individual differences in temporal-change sensitivity and its illusory distortions

See abstract
Perception of time and temporal change are critical for human cognition. Yet, perception of temporal change is susceptible to contextual influences such as changes of a sound's pitch. Using functional magnetic resonance imaging (fMRI), the current study aimed to investigate perception of temporal rate change and pitch-induced illusory distortions. In a 6 × 6 design, human participants (N=19) listened to frequency-modulated sounds (~4 Hz) that varied over time in both modulation rate and pitch. Participants judged the direction of rate change ('speeding up' vs. 'slowing down'), while ignoring changes in pitch. Behaviorally, rate judgments were strongly biased by pitch changes: Participants perceived rate to slow down when pitch decreased and to speed up when pitch increased ('rate-change illusion'). The fMRI data revealed activation increases with increasing task difficulty in pre-SMA, left putamen, and right IFG/insula. Importantly, activation in pre-SMA was linked to the perceptual sensitivity to discriminate rate changes and, together with the left putamen, to relative reductions in susceptibility to pitch-induced illusory distortions. Right IFG/insula activations, however, only scaled with task difficulty. These data offer a distinction between regions whose activations scale with perceptual sensitivity to features of time (pre-SMA) and those that more generally support behaving in difficult listening conditions (IFG/insula). Hence, the data underscore that individual differences in time perception can be related to different patterns of neurofunctional activation.

References

  • Herrmann B, Henry MJ, Scharinger M, Obleser J. Supplementary motor area activations predict individual differences in temporal-change sensitivity and its illusory distortions. Neuroimage. 2014 Jul 23;101C:370–379. PMID: 25064666.

Hooray for Dr. des. Julia Erb …

… the first PhD student from the Auditory Cognition group (she started in January 2011) to defend her PhD thesis (Dr. rer. nat.).

Julia presented her work last Thursday to the defense committee and will now move on to a great postdoc position – it seems she will have a hard choice between two great options.

Thank you, Julia, for the great science and the great fun you brought to the lab! And thanks to the external examiner as well as to Erich Schröger and all committee members at the University of Leipzig, who kindly collaborate on graduating our students.

 

We wish you all the best

– Members of AC


New paper out: Simultaneous fMRI–EEG in auditory categorization by Scharinger et al.

Congratulations to Obleser lab alumnus Mathias Scharinger, who this week published our joint work on simultaneous fMRI–EEG in Frontiers in Human Neuroscience!

Simultaneous EEG-fMRI brain signatures of auditory cue utilization

by Scharinger, Herrmann, Nierhaus, & Obleser

See abstract
Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires flexible choosing and switching amongst available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.

References

  • Scharinger M, Herrmann B, Nierhaus T, Obleser J. Simultaneous EEG-fMRI brain signatures of auditory cue utilization. Front Neurosci. 2014 Jun 4;8:137. PMID: 24926232.

Strauß strikes again — Frontiers in Human Neuroscience

Only a week ago we updated you about Antje's latest publication at NeuroImage. Today, another one is coming in: Antje's, Malte's & Jonas' perspective article on cortical alpha oscillations is in press at Frontiers in Human Neuroscience.

Cortical alpha oscillations as a tool for auditory selective inhibition

— Strauß, Wöstmann & Obleser

See abstract
Listening to speech is often demanding because of signal degradations and the presence of distracting sounds (i.e., "noise"). The question of how the brain achieves the task of extracting only relevant information from the mixture of sounds reaching the ear (i.e., "cocktail party problem") is still open. In analogy to recent findings in vision, we propose cortical alpha (~10 Hz) oscillations measurable using M/EEG as a pivotal mechanism to selectively inhibit the processing of noise to improve auditory selective attention to task-relevant signals. We review initial evidence of enhanced alpha activity in selective listening tasks, suggesting a significant role of alpha-modulated noise suppression in speech. We discuss the importance of dissociating between noise interference in the auditory periphery (i.e., energetic masking) and noise interference with more central cognitive aspects of speech processing (i.e., informational masking). Finally, we point out the adverse effects of age-related hearing loss and/or cognitive decline on auditory selective inhibition. With this perspective article, we set the stage for future studies on the inhibitory role of alpha oscillations for speech processing in challenging listening situations.

References

  • Strauß A, Wöstmann M, Obleser J. Cortical alpha oscillations as a tool for auditory selective inhibition. Front Hum Neurosci. 2014 May 28;8:350. PMID: 24904385.

New paper out: Dissociation of alpha and theta oscillations by Strauß, Kotz, Scharinger, & Obleser

We are very happy to announce that PhD student Antje Strauß got her paper

Alpha and theta brain oscillations index dissociable processes in spoken word recognition

accepted at NeuroImage. Congratulations! Find her paper here.

See the Abstract
Slow neural oscillations (∼1–15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta- (∼3–7 Hz) and alpha-frequencies (∼8–12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word–pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time–frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally "gate" lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. To this end, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition.
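For the curious, the basic quantity behind such analyses, band-limited power over time, can be obtained with the filter-Hilbert method: band-pass filter the trace, then take the squared magnitude of the analytic signal. The sketch below is a generic single-channel illustration with SciPy on a simulated trace; the paper itself used full time-frequency analysis with spatial filtering across sensors, which this does not reproduce, and the filter settings are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(trace, fs, band):
    """Instantaneous band power via filter-Hilbert: zero-phase band-pass
    filter, then squared magnitude of the analytic signal."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trace)
    return np.abs(hilbert(filtered)) ** 2

# Simulated "EEG": a dominant 10 Hz (alpha) rhythm plus a little noise
fs = 250
t = np.arange(5 * fs) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, (8, 12))   # alpha band, ~8-12 Hz
theta = band_power(eeg, fs, (3, 7))    # theta band, ~3-7 Hz
```

Comparing alpha and theta traces like these across conditions is how simultaneous suppression in one band and enhancement in the other can be dissociated.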

References

  • Strauß A, Kotz SA, Scharinger M, Obleser J. Alpha and theta brain oscillations index dissociable processes in spoken word recognition. Neuroimage. 2014 Apr 18. PMID: 24747736.

Come and find us at CNS 2014 in Boston this weekend

The Obleser lab will be presenting four posters at this year's Annual Meeting of the Cognitive Neuroscience Society in Boston.

If you hap­pen to be there, come check us out!

A125: Hemodynamic signatures of (mis-)perceiving temporal change
Herrmann, Bjoern

C63: Temporal predictability attenuates decay in sensory memory
Wilsch, Anna

D54: Stimulus discriminability and predictiveness modulate alpha oscillations in a perceptually demanding memory task
Wöstmann, Malte

D130: Slow acoustic fluctuations entrain low-frequency neural oscillations and determine psychoacoustic performance
Henry, Molly


Welcome Sung-Joo Lim & Alex Brandmeyer

We welcome Sung-Joo Lim (KR) & Alex Brandmeyer (US) as new postdoctoral researchers in the group.

Sung-Joo very recently received her Ph.D. from Carnegie Mellon University, Pittsburgh, PA (US), with her dissertation

Investigating the Neural Basis of Sound Category Learning within a Naturalistic Incidental Task

See her abstract
Adults have notorious difficulty learning non-native speech categories even with extensive training with standard tasks providing explicit trial-by-trial feedback. Recent research in general auditory category learning demonstrates that videogame-based training, which incorporates features that model the naturalistic learning environment, leads to fast and robust learning of sound categories. Unlike standard tasks, the videogame paradigm does not require overt categorization of or explicit attention to sounds; listeners learn sounds incidentally as the game encourages the functional use of sounds in an environment, in which actions and feedback are tightly linked to task success. These characteristics may engage reinforcement learning systems, which can potentially generate internal feedback signals from the striatum. However, the influence of striatal signals on perceptual learning and plasticity online during training has yet to be established. This dissertation work focuses on the possibility that this type of training can lead to behavioral learning of non-native speech categories, and on the investigation of neural processes postulated to be significant for inducing incidental learning of sound categories within the more naturalistic training environment by using fMRI. Overall, our results suggest that reward-related signals from the striatum influence perceptual representations in regions associated with the processing of reliable information that can improve performance within a naturalistic learning task.

Alex very recently received his Ph.D. from Radboud University Nijmegen (NL), with his thesis on

Auditory brain-computer interfaces for perceptual learning in speech and music

See his abstract
We perceive the sounds in our environment, such as language and music, effortlessly and transparently, unaware of the complex neurophysiological mechanisms that underlie our experiences. Using electroencephalography (EEG) and techniques from the field of machine learning, it is possible to monitor our perception of the auditory world in real-time and to pinpoint individual differences in perceptual abilities related to native-language background and auditory experience. Going further, these same methods can be used to provide individuals with neurofeedback during auditory perception as a means of modulating brain responses to sounds, with the eventual aim of incorporating these methods into educational settings to aid in auditory perceptual learning.

Wishing you all the best.