UPDATE — The volcanic ash that Iceland is kindly supplying might prevent us from getting to Montréal. Let’s see whether we make it before the poster session starts on Sunday, but I am slightly pessimistic about that.
I am currently quite busy with finishing off loads of old data and preparing new adventures in auditory neuroscience. Stay tuned for more!
Meanwhile, if you have a few hours’ stop-over in Montréal, Canada next week: why not come and find us at the Annual Meeting of the Cognitive Neuroscience Society?
I will present a collaborative effort with old Konstanz acquaintance Dr. Nathan Weisz on brain-oscillatory measures in degraded speech, a field I currently feel very strongly about and one that will surely keep me busy for years to come:
Also, our student Lars Meyer will present a neat fMRI study we recently ran on really nasty (yet perfectly legal) German syntax and how the brain deals with it under equally nasty (that is, poor) acoustics:
See you in Montréal!
May I humbly point you to three new articles I recently had the honour of being involved in.
Firstly, Chris Petkov, Nikos Logothetis and I have put together a very broad overview of what we think is the current take on processing streams for voice, speech and, more generally, vocalisation input in primates. It appears in THE NEUROSCIENTIST and is aimed at neuroscientists who are not in the language and audition field on an everyday basis. It goes back all the way to Wernicke and also owes a lot to the hard work on functional and anatomical pathways in the primate brain by people like Jon Kaas, Troy Hackett, Josef Rauschecker, and Jeffrey Schmahmann.
Secondly, Angela Friederici, Sonja A. Kotz, Sophie Scott and I have a new article in press in HUMAN BRAIN MAPPING, where we have tried to disentangle the grammatical-violation effects in speech that Angela had observed earlier in the anterior superior temporal gyrus from the effects of speech intelligibility that Sophie had clearly pinpointed in the sulcus just below. When we combined these two manipulations in one experimental framework, the results turned out surprisingly clear-cut! Also, an important finding on the side: while the activations we observed are of course bilateral, any true interaction of grammar and intelligibility was located in the left hemisphere (both in inferior frontal and in superior temporal areas). Watch out here for the upcoming pre-print.
Finally, recent data by Sonja Kotz and me have led me to somewhat scrutinise the way I see the interplay of the anterior and posterior STS, the IFG and, importantly, the left angular gyrus (see the figure below, showing the response of the left angular gyrus across various levels of degradation and semantic expectancy, with pooled data from the current study and a previous one in J Neurosci by Obleser et al., 2007). These data, based on a fine-tuned cloze-probability manipulation applied to sentences at varying levels of degradation, are now available in CEREBRAL CORTEX. Thanks for your interest, and let me know what you think.
- Petkov CI, Logothetis NK, Obleser J. Where are the human speech and voice regions, and do other animals have anything like them? Neuroscientist. 2009 Oct;15(5):419–29. PMID: 19516047. [Open with Read]
- Friederici AD, Kotz SA, Scott SK, Obleser J. Disentangling syntax and intelligibility in auditory language comprehension. Hum Brain Mapp. 2010 Mar;31(3):448–57. PMID: 19718654. [Open with Read]
- Obleser J, Kotz SA. Expectancy constraints in degraded speech modulate the language comprehension network. Cereb Cortex. 2010 Mar;20(3):633–40. PMID: 19561061. [Open with Read]
This coming Monday, I will present in-house some of my recent ruminations on the concept of “verbal” working memory and on-line speech comprehension. It is an ancient issue that received attention mainly in the 1980s, in the light of Baddeley’s great (read: testable) working-memory architecture, including the now-famous phonological store or buffer.
Now, when we turn to degraded speech (or degraded hearing, for that matter) and want to understand how the brain can extract meaning from a degraded signal, the debate as to whether or not this requires working memory has to be revived.
My main concern is that the concept of a phonological store always relies on
“representations […] which […] must, rather, be post-categorical, ‘central’ representations that are functionally remote from more peripheral perceptual or motoric systems. Indeed, the use of the term phonological seems to have been deliberately adopted in favor of the terms acoustic or articulatory (see, e.g., Baddeley, 1992) to indicate the abstract nature of the phonological store’s unit of currency.”
(Jones, Hughes, & Macken, 2006, p. 266; quoted after the worthwhile paper by Pa et al.)
But how does the hearing system arrive at such an abstract representation when the input is compromised and less than clear?
I think it all leads to an at least twofold understanding of “working” memory in acoustic and speech processes, each with its own neural correlates, as they surface in any brain-imaging study of listening to (degraded) speech: on the one hand, a pre-categorical, sensory-based system, probably reflected in activations of the planum temporale, that can be tied to compensatory and effortful attempts to process the speech signal; on the other, a (more classical) post-categorical system that no longer accesses acoustic detail and instead connects to long-term memory representations (phonological and lexical categories).
Stay tuned for more of this.
My year in science 2008 comes to a satisfying end with the fruits of my colleague Dr. Frank Eisner’s (currently ICN / UCL) and my own yearlong efforts appearing online.
Our opinion piece on how the problem of pre-lexical abstraction of speech in structures of the auditory cortex should best be approached is finally available as a beautiful and handy pre-print from Trends in Cognitive Sciences.
As a goody, I quote from the conclusions rather than the openly available abstract:
‘Behavioural investigations in speech sciences and computational modelling have led to a detailed understanding of how the speech perception system can be conceptualised. While this type of research cannot by itself produce a neuroanatomical model of speech processing, it should guide neuroscientific investigations by providing a theoretical framework.
Using the cognitive subtraction method, functional neuroimaging studies have broadly defined the neuroanatomy of pre-lexical processing. Multivariate neuroimaging techniques have the potential to study spectro-temporal encoding and abstraction of speech in more detail, and crucially, in a manner that can be related to results from other fields. […] We suggest that the output of these multivariate methods can serve as input to cognitive models of speech perception, in parallel to behaviour-based likelihoods that have been used in speech science, waveform-based likelihoods that can be extracted with automatic speech recognition techniques, or spike-timing patterns that have been observed in animal studies.
The integration of findings from all of these areas, and the latest technological developments within each of them, can lead to a testable, neuroanatomical model of pre-lexical abstraction.’
Feel free to mail me for reprints.
… if spectral (fine-frequency) details of the speech signal are “predominantly tracked in the right auditory cortex”, Prof. Sophie Scott just rightly asked after my talk fifteen minutes ago at SfN.
I am not sure what Robert Zatorre or David Poeppel would answer, but I think this is not an easy question, and it surely cannot be answered on the basis of the first experiment on spectral vs. temporal detail in speech that we just published.
I would argue that it is open to thorough testing how patients with left or right temporal lobe lesions would cope with removed spectral and temporal detail, respectively.
I am glad that Sophie Scott suggested as much, as I have long maintained that in lesioned patients, aphasic or not, there is much to learn about fine-grained, basic auditory processing. It is entirely understandable that, from a clinical point of view, patients have much more severe communication problems that deserve our clinical attention. Nevertheless, thorough (behavioural) testing of auditory speech perception in volunteering patients is a worthwhile and timely effort.
If you happen to be at SfN this week, you might want to check out my short presentation on a recent study we did: what do spectral (frequency-domain) and temporal (time-domain) features really contribute to speech comprehension processes in the temporal lobes?
It is in the Auditory Cortex session (710), taking place in Room 145B; my talk is scheduled for 9:45 a.m. Obleser J, Eisner F, Kotz SA (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32):8116–8124.
Welcome to this collection of news, facts and miscellanea from the Jonas Obleser “Cognitive Neuroscience of Speech” headquarters. Currently, these headquarters are situated within the fantastic scientific facilities that the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and Prof. Dr. Angela Friederici provide.
Our work focuses on how the human brain analyses, (de)codes and repairs incoming speech signals. Our studies are firmly rooted in auditory neuroscience, yet at times also incorporate paradigms and research questions that are more linguistic or psychological, in order to gain a more comprehensive understanding of the human brain’s amazing faculty for perceiving and comprehending speech.
We mainly use functional MRI to study the brain listening to (often degraded) speech, but EEG, MEG and behavioural studies are part of the arsenal as well.
Thanks for dropping by, and stay tuned.