This coming Monday, I will present in-house some of my recent ruminations on the concept of “verbal” working memory and on-line speech comprehension. It is an old issue that received some attention mainly in the 1980s, in the light of Baddeley’s great (read: testable) working memory architecture, which includes the now famous phonological store or buffer.
Now, when we turn to degraded speech (or degraded hearing, for that matter) and want to understand how the brain extracts meaning from a compromised signal, the debate over whether this requires working memory has to be revived.
My main concern is that the concept of a phonological store always relies on
“representations […] which […] must, rather, be post-categorical, ‘central’ representations that are functionally remote from more peripheral perceptual or motoric systems. Indeed, the use of the term phonological seems to have been deliberately adopted in favor of the terms acoustic or articulatory (see, e.g., Baddeley, 1992) to indicate the abstract nature of the phonological store’s unit of currency.”
(Jones, Hughes, & Macken, 2006, p. 266; quoted after the worthwhile paper by Pa et al.)
But how does the auditory system arrive at such an abstract representation when the input is compromised and less than clear?
I think this leads to an at least twofold understanding of “working” memory in acoustic and speech processing, each component with its own neural correlates, as they surface in any brain imaging study of listening to (degraded) speech: on the one hand, a pre-categorical, sensory-based system, probably reflected in activations of the planum temporale, which can be tied to compensatory and effortful attempts to process the speech signal; on the other, a (more classical) post-categorical system that no longer accesses acoustic detail and instead connects to long-term memory representations (phonological and lexical categories).
Stay tuned for more of this.