Categories
Auditory Cortex, Auditory Neuroscience, Auditory Working Memory, Clinical relevance, EEG / MEG, Neural Oscillations, Papers, Publications, Speech

New paper out: Alpha oscillations in audition

I am also delighted to report the fruits of a very recent collaboration with Nathan Weisz and his OBOB lab at the University of Konstanz, Germany.

Alpha Rhythms in Audition: Cognitive and Clinical Perspectives

In this review paper, which appears in the new and exciting journal "Frontiers in Psychology", we summarize the recent evidence that alpha oscillations (here broadly defined as 6 to 13 Hz) play a very interesting role in the auditory system, just as they do in the visual and somatosensory systems.

In essence, we back Ole Jensen's and others' quite parsimonious idea of alpha as a functional inhibition / gating system across cortical areas.

From our own lab, preliminary data from two recent experiments are included: on the role of alpha oscillations as a potential marker of speech intelligibility and its acoustic determinants, and on speech degradation and working memory load and their combined reflection in alpha power increases.

NB: the final PDF is not yet available, and Front Psychol is still not listed in PubMed. This should not stop you from submitting to their exciting new journals, as the review process is very fair and efficient, and the outreach via free availability promises to be considerable.

References

  • Weisz N, Hartmann T, Müller N, Lorenz I, Obleser J. Alpha rhythms in audition: cognitive and clinical perspectives. Front Psychol. 2011 Apr 26;2:73. PMID: 21687444.
Categories
Auditory Neuroscience, Auditory Working Memory, Clinical relevance, Degraded Acoustics, Speech

What is it with degraded speech and working memory?

This coming Monday, I will present in-house some of my recent ruminations on the concept of "verbal" working memory and on-line speech comprehension. It is an ancient issue that received attention mainly in the 1980s, in the light of Baddeley's great (read: testable) working memory architecture, including the now famous phonological store or buffer.

Now, when we turn to degraded speech (or degraded hearing, for that matter) and want to understand how the brain can extract meaning from a degraded signal, the debate as to whether or not this requires working memory has to be revived.

My main concern is that the concept of a phonological store always relies on

representations […] which […] must, rather, be post-categorical, 'central' representations that are functionally remote from more peripheral perceptual or motoric systems.

Indeed, the use of the term phonological seems to have been deliberately adopted in favor of the terms acoustic or articulatory (see, e.g., Baddeley, 1992) to indicate the abstract nature of the phonological store's unit of currency.

(Jones, Hughes, & Macken, 2006, p. 266; quoted after the worthwhile paper by Pa et al.)

But how does the hearing system arrive at such an abstract representation when the input is compromised and less than clear?

I think it all leads to an at least twofold understanding of "working" memory in acoustic and speech processes, each with its own neural correlates, as they surface in any brain imaging study of listening to (degraded) speech: first, a pre-categorical, sensory-based system, probably reflected in activations of the planum temporale, which can be tied to compensatory and effortful attempts to process the speech signal; and second, a (more classical) post-categorical system that no longer accesses acoustic detail and instead connects to long-term memory representations (phonological and lexical categories).

Stay tuned for more of this.

Categories
Auditory Neuroscience, Clinical relevance, Editorial Notes, Speech

Why will a person with a right-hemispheric stroke not become aphasic…

… if spectral (fine-frequency) details of the speech signal are "predominantly tracked in the right auditory cortex", as Prof. Sophie Scott rightly asked after my talk, fifteen minutes ago, at SfN.

I am not sure what Robert Zatorre and David Poeppel would answer, but I think this is not an easy question, and it surely cannot be answered based on the first experiment on spectral vs. temporal detail in speech that we just published.

I would argue that it remains to be thoroughly tested how patients with left or right temporal lobe lesions cope with removed spectral and temporal detail, respectively.

I am glad that Sophie Scott suggested as much, as I have maintained for years that in lesioned patients, aphasic or not, there is much to learn about fine-grained, basic auditory processing. It is entirely understandable that, from a clinical point of view, patients have much more severe problems in communication that deserve our clinical attention. Nevertheless, thorough (behavioural) testing of auditory speech perception in volunteering patients is a worthwhile and timely effort.