Categories: Auditory Neuroscience, Auditory Working Memory, Clinical relevance, Degraded Acoustics, Speech

What is it with degraded speech and working memory?

Next Monday, I will give an in-house presentation of some of my recent ruminations on the concept of “verbal” working memory and online speech comprehension. It is an ancient issue that received some attention, mainly in the 1980s, in the light of Baddeley’s great (read: testable) working memory architecture, including the now-famous phonological store or buffer.

Now, when we turn to degraded speech (or degraded hearing, for that matter) and want to understand how the brain can extract meaning from a degraded signal, the debate over whether this requires working memory has to be revived.

My main concern is that the concept of a phonological store always relies on

representations […] which […] must, rather, be post-categorical, ‘central’ representations that are functionally remote from more peripheral perceptual or motoric systems. Indeed, the use of the term phonological seems to have been deliberately adopted in favor of the terms acoustic or articulatory (see, e.g., Baddeley, 1992) to indicate the abstract nature of the phonological store’s unit of currency.

(Jones, Hughes, & Macken, 2006, p. 266; as cited in the worthwhile paper by Pa et al.)

But how does the hearing system arrive at such an abstract representation when the input is compromised and less than clear?

I think this leads to an at least twofold understanding of “working” memory in acoustic and speech processes, each with its own neural correlates, as they surface in any brain imaging study of listening to (degraded) speech: on the one hand, a pre-categorical, sensory-based system, probably reflected in activations of the planum temporale, which can be tied to compensatory and effortful attempts to process the speech signal; on the other hand, a (more classical) post-categorical system that no longer accesses acoustic detail and instead connects to long-term memory representations (phonological and lexical categories).

Stay tuned for more of this.

Categories: Auditory Cortex, Papers, Publications

Obleser & Eisner in Trends Cogn Sci (in press) available

My year in science 2008 comes to a satisfying end with the fruits of my colleague Dr. Frank Eisner’s (currently ICN / UCL) and my own yearlong efforts now online.

Our opinion piece on how the problem of pre-lexical abstraction of speech in structures of the auditory cortex is best approached is finally available as a beautiful and handy preprint from Trends in Cognitive Sciences.

As a goody, I quote from the conclusions rather than the openly available abstract:

Behavioural investigations in speech sciences and computational modelling have led to a detailed understanding of how the speech perception system can be conceptualised. While this type of research cannot by itself produce a neuroanatomical model of speech processing, it should guide neuroscientific investigations by providing a theoretical framework.

Using the cognitive subtraction method, functional neuroimaging studies have broadly defined the neuroanatomy of pre-lexical processing. Multivariate neuroimaging techniques have the potential to study spectro-temporal encoding and abstraction of speech in more detail, and crucially, in a manner that can be related to results from other fields. […] We suggest that the output of these multivariate methods can serve as input to cognitive models of speech perception, in parallel to behaviour-based likelihoods that have been used in speech science, waveform-based likelihoods that can be extracted with automatic speech recognition techniques, or spike-timing patterns that have been observed in animal studies.

The integration of findings from all of these areas, and the latest technological developments within each of them, can lead to a testable, neuroanatomical model of pre-lexical abstraction.
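For readers less familiar with multivariate analyses, here is a toy sketch of the idea in the last paragraph of the quote: a decoder trained on imaging patterns can output per-trial category probabilities, the kind of quantity that could be handed to a cognitive model alongside behaviour- or waveform-based likelihoods. It runs on simulated data only; the classifier choice, variable names, and parameters are my own illustrative assumptions, not an actual analysis pipeline.

```python
# Toy illustration: a multivariate decoder turning imaging patterns into
# per-trial category probabilities ("neural likelihoods"). Simulated data;
# all names and parameters are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_trials, n_features = 200, 50                        # e.g. trials x voxels
labels = rng.integers(0, 2, n_trials)                 # two hypothetical speech-sound categories
patterns = rng.standard_normal((n_trials, n_features)) + 0.5 * labels[:, None]

# Cross-validated posterior probabilities: one value per trial and category,
# which could in principle feed a cognitive model of speech perception.
neural_likelihoods = cross_val_predict(
    LogisticRegression(max_iter=1000), patterns, labels,
    cv=5, method="predict_proba",
)
print(neural_likelihoods[:5])
```

In real data, the random patterns would of course be replaced by trial-wise imaging estimates; the point is merely that the decoder’s output is a graded likelihood, not just a hard classification.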

Feel free to mail me for reprints.

References

  • Obleser J, Eisner F. Pre-lexical abstraction of speech in the auditory cortex. Trends Cogn Sci. 2009 Jan;13(1):14–9. PMID: 19070534.
Categories: Auditory Neuroscience, Clinical relevance, Editorial Notes, Speech

Why will a person with a right-hemispheric stroke not become aphasic…

… if spectral (fine-frequency) details of the speech signal are “predominantly tracked in the right auditory cortex”, as Prof. Sophie Scott rightly asked after my talk fifteen minutes ago at SfN.

I am not sure what Robert Zatorre and David Poeppel would answer, but I think this is not an easy question, and it surely cannot be answered on the basis of the first experiment on spectral vs. temporal detail in speech that we just published.

I would argue that it remains open to thorough testing how patients with left or right temporal lobe lesions would cope with the removal of spectral and temporal detail, respectively.

I am glad that Sophie Scott suggested as much, as I have maintained for years that in lesioned patients, aphasic or not, there is much to learn about fine-grained, basic auditory processing. It is entirely understandable that, from a clinical point of view, patients have far more severe problems in communication that deserve our clinical attention. Nevertheless, thorough (behavioural) testing of auditory speech perception in volunteering patients is a worthwhile and timely effort.

Categories: Auditory Neuroscience, Auditory Speech Processing, Degraded Acoustics, Events, fMRI, Noise-Vocoded Speech, Papers, Publications

Talk at the Society for Neuroscience Meeting, Washington, DC, on Wednesday

If you happen to be at SfN this week, you might want to check out my short presentation on a recent study [1] we did: what do spectral (frequency-domain) and temporal (time-domain) features really contribute to speech comprehension processes in the temporal lobes?

It is in the Auditory Cortex Session (710), taking place in Room 145B. My talk is scheduled for 9:45 am.
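For those wondering what removing spectral detail amounts to in practice, here is a minimal noise-vocoding sketch in the spirit of this kind of stimulus manipulation; the band count, band edges, and filter settings below are illustrative assumptions on my part, not the parameters of the actual stimuli in [1].

```python
# Minimal noise-vocoding sketch: strip spectral detail from a speech signal
# while preserving each band's temporal envelope. Parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Vocode `speech` (sampled at `fs` Hz; assumes fs > 2 * f_hi)."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)          # band-limited speech
        envelope = np.abs(hilbert(band))         # its temporal envelope
        carrier = sosfiltfilt(sos, noise)        # band-limited noise carrier
        out += envelope * carrier                # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)   # normalise to avoid clipping
```

Fewer bands leave less spectral detail intact; conversely, low-pass filtering each band’s envelope before remodulation would degrade temporal rather than spectral detail.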

[1] Obleser, J., Eisner, F., Kotz, S.A. (2008) Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32):8116–8124.

References

  • Obleser J, Eisner F, Kotz SA. Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. J Neurosci. 2008 Aug 6;28(32):8116–23. PMID: 18685036.
Categories: Editorial Notes

Kick-Off: Welcome to the new Obleser lab weblog

Welcome to this collection of news, facts, and miscellanea from the Jonas Obleser “Cognitive Neuroscience of Speech” headquarters. Currently, these headquarters are situated within the fantastic scientific facilities provided by the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and Prof. Dr. Angela Friederici.

Our work focuses on how the human brain analyses, (de)codes, and repairs incoming speech signals. Our studies are firmly rooted in auditory neuroscience, yet at times also incorporate paradigms and research questions that are more linguistic or psychological, in order to gain a more comprehensive understanding of the human brain’s amazing faculty for perceiving and comprehending speech.

We mainly use functional MRI to study the brain as it listens to (often degraded) speech, but EEG, MEG, and behavioural studies are part of the arsenal as well.

Thanks for dropping by, and stay tuned.