Categories
Auditory Cortex Auditory Neuroscience Auditory Perception Auditory Speech Processing Degraded Acoustics Executive Functions fMRI Noise-Vocoded Speech Papers Perception Publications Speech

New paper out: Erb, Henry, Eisner & Obleser — Journal of Neuroscience

We are proud to announce that PhD student Julia Erb has just published a paper in the Journal of Neuroscience:

The Brain Dynamics of Rapid Perceptual Adaptation to Adverse Listening Conditions

Effects of adaptation

Grab it here:

Abstract:

Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences …

References

  • Erb J, Henry MJ, Eisner F, Obleser J. The brain dynamics of rapid perceptual adaptation to adverse listening conditions. J Neurosci. 2013 Jun 26;33(26):10688–97. PMID: 23804092.
Categories
Auditory Working Memory Degraded Acoustics EEG / MEG Executive Functions Neural Oscillations Noise-Vocoded Speech Papers Publications Speech

New paper out: Obleser et al., The Journal of Neuroscience

Adverse Listening Conditions and Memory Load Drive a Common Alpha Oscillatory Network

Whether we are engaged in small talk or trying to memorise a telephone number — it is our short-term memory that ensures we don’t lose track. But what if the very same memory gets additionally taxed because the words to be remembered are hard to understand?

Obleser et al., J Neurosci 2012: Alpha oscillations are enhanced both by memorised digits and by the adverse acoustic conditions in which these digits had been presented.
Obleser, J., Wöstmann, M., Hellbernd, N., Wilsch, A., Maess, B. (2012). Adverse listening conditions and memory load drive a common alpha oscillatory network. Journal of Neuroscience, 32(36), 12376–12383.

References

  • Obleser J, Wöstmann M, Hellbernd N, Wilsch A, Maess B. Adverse listening conditions and memory load drive a common α oscillatory network. J Neurosci. 2012 Sep 5;32(36):12376–83. PMID: 22956828.
Categories
Degraded Acoustics fMRI Noise-Vocoded Speech Papers Publications Speech

New paper in press: Erb et al., Neuropsychologia [Update]

I am very proud to announce our first paper that was entirely planned, conducted, analysed and written up since our group has been in existence. Julia joined me as the first PhD student in December 2010, and has since been busy doing awesome work. Check out her first paper!

Auditory skills and brain morphology predict individual differences in adaptation to degraded speech

Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated, and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech showed smaller thresholds in the AM discrimination task. Anatomical brain scans revealed that faster learners had increased volume in the left thalamus (pulvinar). These results suggest that adaptation to vocoded speech benefits from individual AM discrimination skills. This ability to adjust to degraded speech is furthermore reflected anatomically in an increased volume in an area of the thalamus, which is strongly connected to the auditory and prefrontal cortex. Thus, individual auditory skills that are not speech-specific and left thalamus gray matter volume can predict how quickly a listener adapts to degraded speech.

Please be in touch with Julia Erb if you are interested in a preprint as soon as we get hold of the final, typeset manuscript.
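For readers unfamiliar with the stimulus material: a noise vocoder discards the spectral fine structure of speech and keeps only each band’s temporal envelope — exactly the trade-off the abstract describes. Here is a minimal sketch of a 4-band noise vocoder in Python (the function name, band edges and filter settings are my own illustrative assumptions, not the parameters used in the study):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, lo=100.0, hi=4000.0):
    """Noise-vocode `signal`: split it into n_bands frequency bands,
    extract each band's temporal (amplitude) envelope, and use that
    envelope to modulate band-limited noise. Spectral detail is lost;
    the envelope -- the cue listeners adapt to -- is preserved."""
    # Logarithmically spaced band edges (an illustrative choice;
    # vocoding studies often use cochlea-inspired spacing instead).
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)             # band-limit the speech
        envelope = np.abs(hilbert(band))            # extract its envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * noise                     # envelope-modulated noise
    return out
```

With only four bands the result is initially hard to understand but learnable — which is what makes it a useful probe of short-term perceptual adaptation; intelligibility rises as the number of bands increases.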

[Update#1]: Julia has also published a blog post on her work.

[Update#2]: Paper is available here.

References

  • Erb J, Henry MJ, Eisner F, Obleser J. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia. 2012 Jul;50(9):2154–64. PMID: 22609577.
Categories
Auditory Working Memory Degraded Acoustics EEG / MEG Events Executive Functions Neural Oscillations Posters Publications

Further posters at SFN / Neuroscience 2011

In addition to the exciting consonantal mismatch negativity work Mathias and Alexandra will be showing (TUESDAY AM session, posters UU10 and UU11), we will have the following posters this year. Come by!

Chris Petkov and I are showing our brand-new data in the TUESDAY PM session, poster LL14.

I myself will be presenting in the WEDNESDAY AM session, poster XX15: more alpha oscillations in working memory under speech degradation.

Finally, I also have the pleasure of being a co-author on Sarah Jessen’s poster; she is showing très cool multimodal integration data on voices and bodies under noisy conditions in the WEDNESDAY PM session, XX15.

Categories
Auditory Speech Processing Degraded Acoustics EEG / MEG Neural Oscillations Noise-Vocoded Speech Papers Publications Speech

New paper accepted in Cerebral Cortex [Update]

Obleser, J., Weisz, N. (in press). Suppressed alpha oscillations predict intelligibility of speech and its acoustic details. Cerebral Cortex.

[Update]

Paper is available here.

References

  • Obleser J, Weisz N. Suppressed alpha oscillations predict intelligibility of speech and its acoustic details. Cereb Cortex. 2012 Nov;22(11):2466–77. PMID: 22100354.
Categories
Auditory Neuroscience Degraded Acoustics Editorial Notes Events fMRI Linguistics Posters Publications

Visit us at CNS

UPDATE: The volcanic ash that Iceland is kindly supplying might prevent us from getting to Montréal. Let’s see whether we make it there before the poster session starts on Sunday. But I am slightly pessimistic about that.

 

I am currently quite busy with finishing off loads of old data and preparing new adventures in auditory neuroscience. Stay tuned for more!

Meanwhile, if you have a few hours’ stopover in Montréal, Canada next week: why don’t you come and find us at the Annual Meeting of the Cognitive Neuroscience Society?

I will present a collaborative effort with old Konstanz acquaintance Dr. Nathan Weisz on brain oscillatory measures in degraded speech — a field I currently feel very strongly about and which will surely keep me busy for years to come:

Poster D 53 — Spectral features of speech drive the induced EEG brain response: Parametric changes in Alpha- and Theta-band power

Also, our student Lars Meyer will present a neat fMRI study we recently ran on really nasty (yet perfectly legal) German syntax and how the brain deals with it under equally nasty (that is, poor) acoustics:

Poster I31 — When Complex Grammar Must Pass the Bottleneck of Degraded Acoustics: an fMRI Study.

See you in Montréal!

Categories
Auditory Neuroscience Degraded Acoustics Editorial Notes fMRI Linguistics Papers Publications Speech

New articles

May I humbly point you to three new articles I had the honour to be involved in recently.

Firstly, Chris Petkov, Nikos Logothetis and I have put together a very broad overview of what we think is the current take on processing streams for voice, speech and, more generally, vocalisation input in primates. It appears in THE NEUROSCIENTIST and is aimed at neuroscientists who are not in the language and audition field on an everyday basis. It goes back all the way to Wernicke and also owes a lot to the hard work on functional and anatomical pathways in the primate brain by people like Jon Kaas, Troy Hackett, Josef Rauschecker, or Jeffrey Schmahmann.

Secondly, Angela Friederici, Sonja A. Kotz, Sophie Scott and I have a new article in press in HUMAN BRAIN MAPPING, where we have tried to disentangle the grammatical violation effects in speech that Angela had observed earlier in the anterior superior temporal gyrus from the effects of speech intelligibility that Sophie had clearly pinpointed in the sulcus just below. When combining these two manipulations in one experimental framework, the results turned out surprisingly clear-cut! Also, an important finding on the side: while the activations we observed are of course bilateral, any true interaction of grammar and intelligibility was located in the left hemisphere (both in inferior frontal and in superior temporal areas). Watch out here for the upcoming pre-print.

Finally, recent data by Sonja Kotz and me have somewhat scrutinised the way I see the interplay of the anterior and posterior STS, as well as the IFG and, importantly, the left angular gyrus (see the figure below showing the response behaviour of the left angular gyrus over various levels of degradation as well as semantic expectancy, with pooled data from the current as well as a previous study in J Neurosci by Obleser et al., 2007). These data, on a fine-tuned cloze-probability manipulation of sentences of varying degradation, are available now in CEREBRAL CORTEX. Thanks for your interest, and let me know what you think.

 

References

  • Petkov CI, Logothetis NK, Obleser J. Where are the human speech and voice regions, and do other animals have anything like them? Neuroscientist. 2009 Oct;15(5):419–29. PMID: 19516047.
  • Friederici AD, Kotz SA, Scott SK, Obleser J. Disentangling syntax and intelligibility in auditory language comprehension. Hum Brain Mapp. 2010 Mar;31(3):448–57. PMID: 19718654.
  • Obleser J, Kotz SA. Expectancy constraints in degraded speech modulate the language comprehension network. Cereb Cortex. 2010 Mar;20(3):633–40. PMID: 19561061.
Categories
Auditory Neuroscience Auditory Working Memory Clinical relevance Degraded Acoustics Speech

What is it with degraded speech and working memory?

This coming Monday, I will present in-house some of my recent ruminating on the concept of “verbal” working memory and on-line speech comprehension. It is an ancient issue that received some attention mainly in the 1980s, in light of Baddeley’s great (read: testable) working memory architecture, including the now-famous phonological store or buffer.

Now, when we turn to degraded speech (or degraded hearing, for that matter) and want to understand how the brain can extract meaning from a degraded signal, the debate as to whether or not this requires working memory has to be revived.

My main concern is that the concept of a phonological store always relies on

representations […] which […] must, rather, be post-categorical, ‘central’ representations that are functionally remote from more peripheral perceptual or motoric systems. Indeed, the use of the term phonological seems to have been deliberately adopted in favor of the terms acoustic or articulatory (see, e.g., Baddeley, 1992) to indicate the abstract nature of the phonological store’s unit of currency.

(Jones, Hughes, & Macken, 2006, p. 266; quoted after the worthwhile paper by Pa et al.)

But how does the hearing system arrive at such an abstract representation when the input is compromised and less than clear?

I think it all leads to an at least twofold understanding of “working” memory in acoustic and speech processes, each with its own neural correlates, as they surface in any brain imaging study of listening to (degraded) speech: a pre-categorical, sensory-based system, probably reflected by activations of the planum temporale, that can be tied to compensatory and effortful attempts to process the speech signal; and a (more classical) post-categorical system that no longer accesses acoustic detail and instead connects to long-term memory representations (phonological and lexical categories).

Stay tuned for more of this.