Categories
Auditory Neuroscience Auditory Speech Processing EEG / MEG Linguistics Papers Psychology Publications Speech

New paper out: Are early N100 and the late Gamma-band response negatively correlated in comprehension of degraded speech?

Late 2010 was particularly good to us:

Multiple brain signatures of integration in the comprehension of degraded speech

by Jonas Obleser and Sonja Kotz, in NeuroImage.

The final pdf will hopefully be available online very soon. Meanwhile, the figure below captures our main results:

References

  • Obleser J, Kotz SA. Multiple brain signatures of integration in the comprehension of degraded speech. Neuroimage. 2011 Mar 15;55(2):713–23. PMID: 21172443.
Categories
Auditory Cortex Auditory Neuroscience fMRI Linguistics Papers Publications Speech

New paper out: Patterns of vowel and consonant sensitivity

Dear followers of the slowly emerging Obleser lab,
I am glad to present to you a new paper that was published last week:

Segregation of vowels and consonants in human auditory cortex: Evidence for distributed hierarchical organization

by Jonas Obleser, Amber Leaver, John VanMeter, and Josef P. Rauschecker, in Frontiers in Psychology. It was submitted to the new Auditory Cognitive Neuroscience section and will be one of the first papers to appear there.

The paper presents evidence from a small-voxel 3T study we scanned in Georgetown a few years ago that

  • naturally coarticulated syllables like /de:/ or /gu:/ contain enough information for a machine learning algorithm to tell vowel categories (front vs back) apart, and also stop consonant categories (/d/ vs /g/) – across participants!
  • the informative voxels show a surprisingly sparse overlap across subareas of the superior temporal cortex, however, and
  • data from the left anterior region of interest (defined as left and anterior of a probabilistic primary auditory cortex definition sensu Rademacher et al., 2001) appear particularly “geared” towards these speech-from-speech classifications.
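To make the "across participants" point concrete: the logic is that a classifier trained on the voxel patterns of all but one participant is tested on the held-out participant. The sketch below is purely illustrative and not the paper's actual analysis pipeline; the data are synthetic, and the labels, sizes, and choice of a linear SVM with leave-one-participant-out cross-validation are assumptions for demonstration only.

```python
# Illustrative sketch of leave-one-participant-out classification on
# synthetic "voxel pattern" data; NOT the paper's actual pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_participants, trials_per_class, n_voxels = 8, 20, 50

X, y, groups = [], [], []
for p in range(n_participants):
    for label in (0, 1):  # hypothetical: front vs back vowels
        # a class-specific mean pattern shared across participants,
        # buried in per-trial noise
        mean = np.zeros(n_voxels)
        mean[:10] = 0.8 if label else -0.8
        X.append(mean + rng.normal(size=(trials_per_class, n_voxels)))
        y += [label] * trials_per_class
        groups += [p] * trials_per_class
X, y, groups = np.vstack(X), np.array(y), np.array(groups)

# Train on n-1 participants, test on the held-out one:
# one accuracy score per held-out participant.
scores = cross_val_score(LinearSVC(), X, y, groups=groups,
                         cv=LeaveOneGroupOut())
print(scores.mean())
```

If the classes carry a pattern that generalises across participants (as in this toy data), held-out accuracy rises well above the 50% chance level.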

The paper was edited by Micah Murray and received very constructive reviews from Elia Formisano and Lee Miller (a feature of Frontiers journals is to disclose the peer reviewers after acceptance; a nice feature, I think).

The final pdf is available online now, and it seems that the PubMed listings for the Frontiers in Psychology journal will appear very soon.

References

  • Obleser J, Leaver AM, Vanmeter J, Rauschecker JP. Segregation of vowels and consonants in human auditory cortex: evidence for distributed hierarchical organization. Front Psychol. 2010 Dec 24;1:232. PMID: 21738513.
Categories
Auditory Neuroscience Degraded Acoustics Editorial Notes Events fMRI Linguistics Posters Publications

Visit us at CNS

UPDATE — The volcano ash that Iceland is kindly supplying might prevent us from getting to Montréal. Let's see whether we make it before the poster session starts on Sunday. But I am slightly pessimistic about that.


I am currently quite busy with finishing off loads of old data and preparing new adventures in auditory neuroscience. Stay tuned for more!

Meanwhile, if you have a few hours' stop-over in Montréal, Canada next week: why don't you come and find us at the Annual Meeting of the Cognitive Neuroscience Society?

I will present a collaborative effort with old Konstanz acquaintance Dr. Nathan Weisz on brain oscillatory measures in degraded speech, a field I currently feel very strongly about and which will surely keep me busy for years to come:

Poster D53 — Spectral features of speech drive the induced EEG brain response: Parametric changes in alpha- and theta-band power

Also, our student Lars Meyer will present a neat fMRI study we recently ran on really nasty (yet perfectly legal) German syntax and how the brain deals with it under equally nasty (that is, poor) acoustics:

Poster I31 — When Complex Grammar Must Pass the Bottleneck of Degraded Acoustics: An fMRI Study.

See you in Montréal!

Categories
Auditory Neuroscience Degraded Acoustics Editorial Notes fMRI Linguistics Papers Publications Speech

New articles

May I humbly point you to three new articles I recently had the honour to be involved in.

Firstly, Chris Petkov, Nikos Logothetis and I have put together a very broad overview of what we think is the current take on processing streams of voice, speech and, more generally, vocalisation input in primates. It appears in THE NEUROSCIENTIST and is aimed at neuroscientists who are not in the language and audition field on an everyday basis. It goes back all the way to Wernicke and also owes a lot to the hard work on functional and anatomical pathways in the primate brain by people like Jon Kaas, Troy Hackett, Josef Rauschecker, or Jeffrey Schmahmann.

Secondly, Angela Friederici, Sonja A. Kotz, Sophie Scott and I have a new article in press in HUMAN BRAIN MAPPING where we have tried to disentangle the grammatical violation effects in speech that Angela had observed earlier in the anterior superior temporal gyrus from the effects of speech intelligibility Sophie had clearly pinpointed in the sulcus just below. When combining these two manipulations into one experimental framework, the results turned out surprisingly clear-cut! Also, an important finding on the side: while the activations we observed are of course bilateral, any true interaction of grammar and intelligibility was located in the left hemisphere (both in inferior frontal and in superior temporal areas). Watch out here for the upcoming pre-print.

Finally, recent data by Sonja Kotz and me have somewhat scrutinised the way I see the interplay of the anterior and posterior STS, as well as the IFG and, importantly, the left angular gyrus (see the figure below showing the response behaviour of the left angular gyrus over various levels of degradation as well as semantic expectancy, with pooled data from the current as well as a previous study in J Neurosci by Obleser et al., 2007). These data, on a fine-tuned cloze-probability manipulation of sentences of varying degradation, are available now in CEREBRAL CORTEX. Thanks for your interest, and let me know what you think.


References

  • Petkov CI, Logothetis NK, Obleser J. Where are the human speech and voice regions, and do other animals have anything like them? Neuroscientist. 2009 Oct;15(5):419–29. PMID: 19516047.
  • Friederici AD, Kotz SA, Scott SK, Obleser J. Disentangling syntax and intelligibility in auditory language comprehension. Hum Brain Mapp. 2010 Mar;31(3):448–57. PMID: 19718654.
  • Obleser J, Kotz SA. Expectancy constraints in degraded speech modulate the language comprehension network. Cereb Cortex. 2010 Mar;20(3):633–40. PMID: 19561061.
Categories
Auditory Neuroscience Auditory Working Memory Clinical relevance Degraded Acoustics Speech

What is it with degraded speech and working memory?

This upcoming Monday, I will present in-house some of my recent ruminating on the concept of “verbal” working memory and on-line speech comprehension. It is an ancient issue that received some attention mainly in the 1980s, in the light of Baddeley's great (read: testable) working memory architecture including the now famous phonological store or buffer.

Now, when we turn to degraded speech (or degraded hearing, for that matter) and want to understand how the brain can extract meaning from a degraded signal, the debate as to whether or not this requires working memory has to be revived.

My main concern is that the concept of a phonological store always relies on

representations […] which […] must, rather, be post-categorical, ‘central’ representations that are functionally remote from more peripheral perceptual or motoric systems.

Indeed, the use of the term phonological seems to have been deliberately adopted in favor of the terms acoustic or articulatory (see, e.g., Baddeley, 1992) to indicate the abstract nature of the phonological store’s unit of currency.

(Jones, Hughes, & Macken, 2006, p. 266; quoted after the worthwhile paper by Pa et al.)

But how does the hearing system arrive at such an abstract representation when the input is compromised and less than clear?

I think it all leads to an (at least) twofold understanding of “working” memory in acoustic and speech processes, each with its own neural correlates, as they surface in any brain imaging study of listening to (degraded) speech: a pre-categorical, sensory-based system, probably reflected by activations of the planum temporale, that can be tied to compensatory and effortful attempts to process the speech signal; and a (more classical) post-categorical system that no longer accesses acoustic detail and instead connects to long-term memory representations (phonological and lexical categories).

Stay tuned for more of this.

Categories
Auditory Neuroscience Clinical relevance Editorial Notes Speech

Why will a person with a right-hemispheric stroke not become aphasic…

… if spectral (fine-frequency) details of the speech signal are “predominantly tracked in the right auditory cortex”, Prof. Sophie Scott quite rightly asked after my talk fifteen minutes ago at SfN.

I am not sure what Robert Zatorre and David Poeppel would answer, but I think that this is not an easy question, and it can surely not be answered based on the first experiment on spectral vs. temporal detail in speech that we just published.

I would argue that it is open to thorough testing how patients with left or right temporal lobe lesions would cope with removed spectral and temporal detail, respectively.

I am glad that Sophie Scott somewhat suggested this, as I have maintained for years that in lesioned patients, aphasic or not, there is much to learn about fine-grained, basic auditory processing. It is highly understandable that, from a clinical point of view, patients have much more severe problems in communication that deserve our clinical attention. Nevertheless, thorough (behavioural) testing of auditory speech perception in volunteering patients is a worthwhile and timely effort.

Categories
Auditory Neuroscience Auditory Speech Processing Degraded Acoustics Events fMRI Noise-Vocoded Speech Papers Publications

Talk at the Society for Neuroscience Meeting, Washington, DC, on Wednesday

If you happen to be at SfN this week, you might want to check out my short presentation on a recent study [1] we did: What do spectral (frequency-domain) and temporal (time-domain) features really contribute to speech comprehension processes in the temporal lobes?
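For readers outside the field, the spectral/temporal distinction is easiest to see in the noise-vocoding technique often used to degrade speech: the signal is split into a few frequency bands, each band's slow temporal envelope is kept, and the spectral fine structure is discarded by using noise as the carrier. The sketch below is a minimal, assumed implementation for illustration (band count, filter order, and edge frequencies are arbitrary choices), not the stimulus code used in the study.

```python
# Minimal noise-vocoder sketch: keep each band's temporal envelope,
# discard spectral fine structure by modulating band-limited noise.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=4000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))          # slow temporal envelope
        noise = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * noise                   # envelope-modulated noise carrier
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t) * (t < 0.5)   # toy stand-in for speech
vocoded = noise_vocode(tone, fs, n_bands=4)
```

Fewer bands mean less spectral detail; smoothing the envelope before modulation would additionally degrade temporal detail, which is exactly the two-way manipulation at issue.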

It is in the Auditory Cortex Session (710), taking place in Room 145B. My talk is scheduled for 9:45 am.

[1] Obleser, J., Eisner, F., Kotz, S.A. (2008) Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32):8116–8124.

References

  • Obleser J, Eisner F, Kotz SA. Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. J Neurosci. 2008 Aug 6;28(32):8116–23. PMID: 18685036.