Categories
Ageing, Auditory Neuroscience, Auditory Speech Processing, Clinical relevance, Degraded Acoustics, Executive Functions, fMRI, Hearing Loss, Noise-Vocoded Speech, Papers, Publications, Speech

New paper in press: Erb & Obleser, Frontiers in Systems Neuroscience

Julia Erb just had the third study of her PhD project accepted:

Upregulation of cognitive control networks in older adults’ speech comprehension

It will appear in Frontiers in Systems Neuroscience soon.

The data are an extension (in older adults) of Julia’s Journal of Neuroscience paper from earlier this year.

References

  • Erb J, Obleser J. Upregulation of cognitive control networks in older adults’ speech comprehension. Front Syst Neurosci. 2013 Dec 24;7:116. PMID: 24399939.
Categories
Auditory Neuroscience, Auditory Perception, Auditory Working Memory, Executive Functions, fMRI, Papers, Perception, Publications

New paper published in Cerebral Cortex by Henry, Herrmann, & Obleser

When we listen to sounds like speech and music, we have to make sense of different acoustic features that vary simultaneously along multiple time scales. This means that we, as listeners, have to selectively attend to, but at the same time selectively ignore, separate but intertwined features of a stimulus.

Brain regions associated with selective attending to and selective ignoring of temporal stimulus features.

A newly published fMRI study by Molly Henry, Björn Herrmann, and Jonas Obleser found a network of brain regions that responded oppositely to identical stimulus characteristics depending on whether they were relevant or irrelevant, even when both stimulus features involved attention to time and temporal features.

You can check out the article here:

http://cercor.oxfordjournals.org/content/early/2013/08/23/cercor.bht240.full

References

  • Henry MJ, Herrmann B, Obleser J. Selective attention to temporal features on nested time scales. Cereb Cortex. 2013 Aug 26. PMID: 23978652.
Categories
Auditory Cortex, Auditory Neuroscience, Auditory Perception, Auditory Speech Processing, Degraded Acoustics, Executive Functions, fMRI, Noise-Vocoded Speech, Papers, Perception, Publications, Speech

New paper out: Erb, Henry, Eisner & Obleser — Journal of Neuroscience

We are proud to announce that PhD student Julia Erb has just published a paper in the Journal of Neuroscience:

The Brain Dynamics of Rapid Perceptual Adaptation to Adverse Listening Conditions

Effects of adaptation

Grab it via the reference below.

Abstract:

Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences […]

References

  • Erb J, Henry MJ, Eisner F, Obleser J. The brain dynamics of rapid perceptual adaptation to adverse listening conditions. J Neurosci. 2013 Jun 26;33(26):10688–97. PMID: 23804092.
Categories
Degraded Acoustics, fMRI, Noise-Vocoded Speech, Papers, Publications, Speech

New paper in press: Erb et al., Neuropsychologia [Update]

I am very proud to announce our first paper that was entirely planned, conducted, analysed, and written up since our group came into existence. Julia joined me as the first PhD student in December 2010, and has since been busy doing awesome work. Check out her first paper!

Auditory skills and brain morphology predict individual differences in adaptation to degraded speech

Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated, and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech showed smaller thresholds in the AM discrimination task. Anatomical brain scans revealed that faster learners had increased volume in the left thalamus (pulvinar). These results suggest that adaptation to vocoded speech benefits from individual AM discrimination skills. This ability to adjust to degraded speech is furthermore reflected anatomically in an increased volume in an area of the thalamus, which is strongly connected to the auditory and prefrontal cortex. Thus, individual auditory skills that are not speech-specific and left thalamus gray matter volume can predict how quickly a listener adapts to degraded speech.

Please be in touch with Julia Erb if you are interested in a preprint as soon as we get hold of the final, typeset manuscript.
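For readers unfamiliar with the stimulus, here is a minimal Python sketch of the general noise-vocoding idea: band-pass the speech into a few bands, extract each band’s temporal envelope, and re-impose that envelope on band-limited noise. The band edges, filter orders, and envelope cutoff below are illustrative assumptions, not the parameters of the actual study stimuli.

```python
# Minimal noise-vocoder sketch (illustrative only; band edges, filter orders,
# and the envelope low-pass cutoff are assumptions, not the study's settings).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, band_edges_hz=(100, 500, 1500, 3500, 7000)):
    """Return a 4-band noise-vocoded version of `speech` sampled at `fs` Hz
    (fs must exceed twice the highest band edge)."""
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(speech))            # white-noise carrier
    out = np.zeros(len(speech))
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                   # band-pass the speech
        env = np.abs(hilbert(band))                       # temporal envelope
        sos_env = butter(4, 30, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)                   # smooth the envelope
        out += env * sosfiltfilt(sos, carrier)            # envelope on noise band
    return out / np.max(np.abs(out))                      # normalise amplitude
```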

[Update #1]: Julia has also published a blog post on her work.

[Update #2]: The paper is available here.

References

  • Erb J, Henry MJ, Eisner F, Obleser J. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia. 2012 Jul;50(9):2154–64. PMID: 22609577.
Categories
Auditory Cortex, Auditory Speech Processing, fMRI, Papers, Publications, Speech

New paper out: McGettigan et al., Neuropsychologia


Last years’s lab guest and long-time col­lab­o­ra­tor Car­olyn McGet­ti­gan has put out anoth­er one:

Speech comprehension aided by multiple modalities: Behavioural and neural interactions

I had the pleasure of being involved initially, when Carolyn conceived much of this, and again when things came together in the end. Carolyn nicely demonstrates how varying audio and visual clarity interacts with the semantic benefits a listener can get from the famous Kalikow SPIN (speech in noise) sentences. The data highlight posterior STS and the fusiform gyrus as sites for convergence of auditory, visual, and linguistic information.

Check it out!

References

  • McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK. Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia. 2012 Apr;50(5):762–76. PMID: 22266262.
Categories
Auditory Neuroscience, Auditory Perception, fMRI, Linguistics, Papers, Publications, Speech

New paper out: “Upstream delegation” for processing of complex syntax under degraded acoustics

A new paper is about to appear in NeuroImage on the interaction of syntactic complexity and acoustic degradation.

It is written by myself, PhD student Lars Meyer, and Angela Friederici. In a way, the paper brings together one of Angela’s main research questions (which brain circuits mediate the processing of syntax?) with a long-standing interest of mine, namely how adverse listening situations affect the comprehension of speech.

The paper is entitled

Dynamic assignment of neural resources in auditory comprehension of complex sentences

The paper first establishes that acoustic variants of increasingly complex sentences essentially behave like written versions of these sentences.
The data then neatly show that processing challenging (but legal) syntax under various levels of degradation has a very different effect on the neural circuits involved than profiting from semantics: while the latter has previously been shown to involve more widespread, heteromodal brain areas, the double demand of increasingly complex syntax and an increasingly degraded speech signal (from which the complex syntax has to be parsed) elicits an “upstream” shift of activation back to less abstract processing areas in the superior temporal and prefrontal/frontal cortex.

We have tentatively termed this process “upstream delegation”. We have also tried to establish a slightly unusual method to do justice to the fMRI activation data: we included all z-scores gathered along certain spatial dimensions, irrespective of whether they were sub- or suprathreshold, and treated them as distributions. Check it out and let us know what you think.
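To give a rough idea of what treating z-scores as distributions can look like in practice, here is a simplified Python sketch: take all voxelwise z-scores within a region, sub- and suprathreshold alike, bin them along one spatial axis, and keep the full distribution per bin rather than a thresholded map. The axis choice, binning, and summary statistic are assumptions for illustration and differ from the paper’s actual procedure.

```python
# Illustrative sketch: profile ALL z-scores (no threshold) along a spatial axis.
import numpy as np

def z_profile(z_map, mask, axis=1, n_bins=20):
    """Per-bin distributions of voxel z-scores along one spatial axis.

    z_map : 3-D array of voxelwise z-scores
    mask  : boolean 3-D array restricting the analysis to a region
    axis  : dimension to profile along (e.g., a posterior-anterior axis)
    """
    coords = np.argwhere(mask)                  # voxel indices inside the region
    zvals = z_map[mask]                         # all z-scores, unthresholded
    pos = coords[:, axis]
    edges = np.linspace(pos.min(), pos.max(), n_bins + 1)
    which = np.digitize(pos, edges[1:-1])       # bin index (0 .. n_bins - 1)
    return [zvals[which == b] for b in range(n_bins)]

# e.g., summarise and compare conditions via per-bin medians:
# medians = [np.median(d) for d in z_profile(z_map, roi_mask)]
```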

References

  • Obleser J, Meyer L, Friederici AD. Dynamic assignment of neural resources in auditory comprehension of complex sentences. Neuroimage. 2011 Jun 15;56(4):2310–20. PMID: 21421059.
Categories
Auditory Cortex, Auditory Neuroscience, fMRI, Linguistics, Papers, Publications, Speech

New paper out: Patterns of vowel and consonant sensitivity

Dear followers of the slowly emerging Obleser lab,
I am glad to present to you a new paper that was published last week:

Segregation of vowels and consonants in human auditory cortex: Evidence for distributed hierarchical organization

by Jonas Obleser, Amber Leaver, John VanMeter, and Josef P. Rauschecker, in Frontiers in Psychology. It was submitted to the journal’s new Auditory Cognitive Neuroscience section and will be one of the first papers to appear in this section.

The paper presents evidence from a small-voxel 3T study we scanned in Georgetown a few years ago that

  • naturally coarticulated syllables like /de:/ or /gu:/ contain enough information for a machine-learning algorithm to tell vowel categories (front vs back) from each other, and also stop-consonant categories (/d/ vs /g/) – across participants! (A schematic sketch follows this list.)
  • with a surprisingly sparse overlap across subareas of the superior temporal cortex, however, and
  • data from the left anterior region of interest (defined as left of and anterior to a probabilistic primary auditory cortex definition, sensu Rademacher et al., 2001) appear particularly “geared” towards these speech-from-speech classifications.
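To illustrate the kind of across-participants classification meant in the first bullet, here is a generic leave-one-participant-out sketch using scikit-learn. The features, classifier, and cross-validation scheme are assumptions for the sake of the example and not necessarily those used in the paper.

```python
# Generic across-participants decoding sketch (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def across_participant_accuracy(X, y, groups):
    """Mean decoding accuracy with each participant held out in turn.

    X      : trials x voxels activation patterns (e.g., from an auditory ROI)
    y      : category label per trial (e.g., 0 = front vowel, 1 = back vowel)
    groups : participant ID per trial, so test folds contain unseen participants
    """
    clf = LinearSVC(C=1.0, max_iter=10000)
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    return scores.mean()
```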

The paper was edited by Micah Murray and received very constructive reviews from Elia Formisano and Lee Miller (a feature of Frontiers journals is to disclose the peer reviewers after acceptance; a nice feature, I think).

The final PDF is available online now, and it seems that the PubMed listings for the Frontiers in Psychology journal are about to happen very soon.

References

  • Obleser J, Leaver AM, Vanmeter J, Rauschecker JP. Segregation of vowels and consonants in human auditory cortex: evidence for distributed hierarchical organization. Front Psychol. 2010 Dec 24;1:232. PMID: 21738513.
Categories
Auditory Neuroscience, Degraded Acoustics, Editorial Notes, Events, fMRI, Linguistics, Posters, Publications

Visit us at CNS

UPDATE — The volcanic ash that Iceland is kindly supplying might prevent us from getting to Montréal. Let’s see whether we make it before the poster session starts on Sunday. But I am slightly pessimistic about that.

I am currently quite busy with finishing off loads of old data and preparing new adventures in auditory neuroscience. Stay tuned for more!

Meanwhile, if you have a few hours’ stop-over in Montréal, Canada next week: why don’t you come and find us at the Annual Meeting of the Cognitive Neuroscience Society?

I will present a collaborative effort with old Konstanz acquaintance Dr. Nathan Weisz on brain oscillatory measures in degraded speech, a field I currently feel very strongly about and one that will surely keep me busy for years to come:

Poster D 53 — Spectral features of speech drive the induced EEG brain response: Parametric changes in Alpha- and Theta-band power

Also, our student Lars Meyer will present a neat fMRI study we recently ran on really nasty (yet perfectly legal) German syntax and how the brain deals with it under equally nasty (that is, poor) acoustics:

Poster I31 — When Complex Grammar Must Pass the Bottleneck of Degraded Acoustics: an fMRI Study

See you in Montréal!