Last week we had our first “Auditory Cognition” group summer BBQ. Research assistant Christoph Daube brought this amazing cake, sporting some anatomical knowledge and some serious patisserie skills. Thank you, Christoph!
I am very proud to announce our first paper that was entirely planned, conducted, analysed, and written up since our group came into existence. Julia joined me as the first PhD student in December 2010 and has since been busy doing awesome work. Check out her first paper!
Auditory skills and brain morphology predict individual differences in adaptation to degraded speech
Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated, and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who were faster at learning to understand degraded speech showed lower thresholds in the AM discrimination task. Anatomical brain scans revealed that faster learners had increased volume in the left thalamus (pulvinar). These results suggest that adaptation to vocoded speech benefits from individual AM discrimination skills. This ability to adjust to degraded speech is furthermore reflected anatomically in an increased volume in an area of the thalamus which is strongly connected to auditory and prefrontal cortex. Thus, individual auditory skills that are not speech-specific and left thalamus gray matter volume can predict how quickly a listener adapts to degraded speech.

Please be in touch with Julia Erb if you are interested in a preprint as soon as we get hold of the final, typeset manuscript.
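For readers curious what noise-vocoding actually does to the signal, here is a minimal sketch in Python (NumPy/SciPy) of a Shannon-style 4-band noise vocoder: band-limited envelopes are extracted and used to modulate noise carriers, discarding the spectral fine structure. The band edges, filter orders, and envelope cutoff below are illustrative assumptions, not the exact parameters used in the study.

```python
# A minimal, illustrative 4-band noise vocoder, roughly in the spirit of the
# stimuli described above. All parameters are assumptions for demonstration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, band_edges_hz=(100, 500, 1500, 3500, 7000),
                 env_cutoff_hz=30.0):
    """Replace the fine structure in each band with noise, keep the envelope."""
    vocoded = np.zeros_like(signal, dtype=float)
    # Low-pass filter for smoothing the band envelopes
    env_sos = butter(4, env_cutoff_hz, btype="low", fs=fs, output="sos")
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        # Temporal envelope of this band (Hilbert magnitude, then smoothed)
        envelope = np.clip(sosfiltfilt(env_sos, np.abs(hilbert(band))), 0.0, None)
        # Carrier: band-limited white noise, modulated by the speech envelope
        carrier = sosfiltfilt(band_sos, np.random.randn(len(signal)))
        vocoded += envelope * carrier
    # Normalise to the RMS of the original signal
    vocoded *= np.sqrt(np.mean(signal**2) / np.mean(vocoded**2))
    return vocoded
```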
[Update #1]: Julia has also published a blog post on her work.
[Update #2]: The paper is available here.
References
- Erb J, Henry MJ, Eisner F, Obleser J. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia. 2012 Jul;50(9):2154–64. PMID: 22609577.
I am personally not entirely convinced that weblogs will survive as a tool for communication. Nevertheless, I support our Institute's initiative, together with the German public science magazine “Spektrum der Wissenschaft”, to start a new blog entitled “Neurocognition”. It's hosted at scilogs.eu and scilogs.de. I have the honour of serving as one of the staff writers there; let's see where this will take us.
For a start, I let go and wrote about my fascination with brain oscillations. Please at least pretend to be surprised at this choice of topic!
Last year's lab guest and long-time collaborator Carolyn McGettigan has put out another one:
Speech comprehension aided by multiple modalities: Behavioural and neural interactions
I had the pleasure of being involved early on, when Carolyn conceived much of this, and again when things came together in the end. Carolyn nicely demonstrates how varying auditory and visual clarity interacts with the semantic benefit a listener can draw from the famous Kalikow SPIN (speech in noise) sentences. The data highlight the posterior STS and the fusiform gyrus as sites of convergence for auditory, visual, and linguistic information.
References
- McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK. Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia. 2012 Apr;50(5):762–76. PMID: 22266262.
Recently, with a data set dating back to my time in Angela Friederici’s department, we proposed the idea that auditory signal degradation would affect the exact configuration of activity along the main processing streams of language, in the superior temporal and inferior frontal cortex. We tentatively coined this process “upstream delegation”: the activations that were driven by increasing syntactic demands, with the challenge of decreasing signal quality coming on top, were suddenly found more “upstream” from where we had located them with improving signal quality.
In addition to the exciting consonantal mismatch negativity work Mathias and Alexandra will be showing (TUESDAY AM session, posters UU10 and UU11), we will have the following posters this year. Come by!
Chris Petkov and I are showing our brand new data in the TUESDAY PM session, poster LL14.
I myself will be presenting in the WEDNESDAY AM session, XX15 – more alpha oscillations in working memory under speech degradation.
Finally, I also have the pleasure of being a co-author on Sarah Jessen’s poster; she is showing très cool multimodal integration data on voices and bodies under noisy conditions in the WEDNESDAY PM session, XX15.
There will be two poster presentations at SFN in Washington, DC, on the topic of auditory predictions in speech perception. The first poster, authored by Alexandra Bendixen, Mathias Scharinger, and Jonas Obleser, is summarized as follows:
Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., sluggish articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative mechanisms. The nature of the underlying neural mechanisms is not yet well understood. In the present study, we investigated the detection of missing information by occasionally omitting the final consonants of the German words “Lachs” (salmon) or “Latz” (bib), resulting in the syllable “La” (no semantic meaning). In three different conditions, stimulus presentation was set up so that subjects expected only the word “Lachs” (condition 1), only the word “Latz” (condition 2), or the words “Lachs” or “Latz” with equal probability (condition 3). Thus essentially, the final segment was predictable in conditions 1 and 2, but unpredictable in condition 3. Stimuli were presented outside the focus of attention while subjects were watching a silent video. Brain responses were measured with multi-channel electroencephalogram (EEG) recordings. In all conditions, an omission response was elicited from 125 to 165 ms after the expected onset of the final segment. The omission response shared characteristics of the omission mismatch negativity (MMN) with generators in auditory cortical areas. Critically, the omission response was enhanced in amplitude in the two predictable conditions (1, 2) compared to the unpredictable condition (3). Violating a strong prediction thus elicited a more pronounced omission response. Consistent with a predictive coding account, the processing of missing linguistic information appears to be modulated by predictive context.
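For illustration, here is a minimal sketch of how such an omission response could be quantified from epoched EEG data, assuming MNE-Python. The file name, condition labels, and channel picks are hypothetical and not part of the original analysis pipeline.

```python
# A minimal sketch: compare omission responses between predictable and
# unpredictable conditions in the 125-165 ms window. File name, condition
# labels and channel picks are assumptions for demonstration only.
import mne

epochs = mne.read_epochs("omission_epochs-epo.fif")  # hypothetical file

# Evoked responses time-locked to the expected onset of the omitted segment
evoked_predictable = epochs["predictable"].average()
evoked_unpredictable = epochs["unpredictable"].average()

# Mean amplitude at fronto-central channels, where omission MMNs are
# typically largest
picks = ["Fz", "FCz", "Cz"]
tmin, tmax = 0.125, 0.165

def window_mean(evoked):
    data = evoked.copy().pick(picks).crop(tmin, tmax).data
    return data.mean() * 1e6  # volts -> microvolts

print(f"Omission response, predictable:   {window_mean(evoked_predictable):.2f} µV")
print(f"Omission response, unpredictable: {window_mean(evoked_unpredictable):.2f} µV")
```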
The second poster looks at similar material, but contrasts coronal [t] with dorsal [k], yielding interesting asymmetries in MMN responses:
Research in auditory neuroscience has led to a better understanding of the neural bases of speech perception, but the representational nature of speech sounds within words is still a matter of debate. Electrophysiological research on single speech sounds provided evidence for abstract representational units that comprise information about both acoustic structure and articulator configuration (Phillips et al., 2000), thereby referring to phonological categories. Here, we test the processing of word-final consonants differing in their place of articulation (coronal [ts] vs. dorsal [ks]) and acoustic structure, as seen in the time-varying formant (resonance) frequencies. The respective consonants distinguish between the German nouns Latz (bib) and Lachs (salmon), recorded from a female native speaker. Initial consonant-vowel sequences were averaged across the two nouns in order to avoid coarticulatory cues before the release of the consonants. Latz and Lachs served as standard and deviant in a passive oddball paradigm, while the EEG from 20 participants was recorded. The change from standard [ts] to deviant [ks] and vice versa was accompanied by a discernible Mismatch Negativity (MMN) response (Näätänen et al., 2007). This response showed an intriguing asymmetry, as seen in a main effect of condition (deviant Latz vs. deviant Lachs, F(1,1920) = 291.84, p < 0.001) in an omnibus mixed-effects model. Crucially, the MMN for the deviant Latz was on average more negative than the MMN for the deviant Lachs from 135 to 185 ms post deviance onset (p < 0.001). We interpret these findings as reflecting a difference in phonological specificity: Following Eulitz and Lahiri (2004), we assume coronal segments ([ts]) to have less specific (‘featurally underspecified’) representations than dorsal segments ([ks]). In standard position, Lachs thus activated a memory trace with a more specific final consonant, for which the deviant provided a stronger mismatch than vice versa, i.e., when Latz activated a memory trace with a less specific final consonant. Our results support a model of speech perception in which sensory information is processed in terms of discrete units independent of higher lexical properties, as the asymmetry cannot be explained by differences in lexical surface frequencies between Latz and Lachs (both log-frequencies of 0.69). We can also rule out a frequency effect on the segmental level. Thus, it appears that speech perception involves a level of processing where individual segmental representations within words are evaluated.
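As a rough illustration of the kind of omnibus mixed-effects comparison mentioned in the abstract, here is a sketch using statsmodels on a hypothetical long-format table of single-trial amplitudes; the file name, column names, and model formula are assumptions, not the original model specification.

```python
# A minimal sketch of a mixed-effects comparison of deviant Latz vs. deviant
# Lachs, assuming a table of single-trial mean amplitudes (135-185 ms window).
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: subject, condition ("deviant_Latz" / "deviant_Lachs"),
# amplitude (mean single-trial amplitude in microvolts)
trials = pd.read_csv("mmn_single_trials.csv")  # hypothetical file

# Linear mixed model: fixed effect of condition, random intercept per subject
model = smf.mixedlm("amplitude ~ condition", data=trials,
                    groups=trials["subject"])
result = model.fit()
print(result.summary())  # the condition coefficient tests the MMN asymmetry
```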
We are happy to announce that our paper “Asymmetries in the processing of vowel height” will be appearing in the Journal of Speech, Language, & Hearing Research, authored by Philip Monahan, William Idsardi and Mathias Scharinger. A short summary is given below:
Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the representation of mid vowels (e.g., [ɛ]) that are articulated with a neutral position in regard to height. One hypothesis is that their representation is less specific than the representation of vowels with a more specific position (e.g., [æ]).
Method: In a magnetoencephalography study, we tested the underspecification of the mid vowel in American English. Using a mismatch negativity (MMN) paradigm, mid and low lax vowels ([ɛ]/[æ]) and high and low lax vowels ([ɪ]/[æ]) were opposed, and M100/N1 dipole source parameters as well as MMN latency and amplitude were examined.
Results: Larger MMNs occurred when the mid vowel [ɛ] was a deviant to the standard [æ], a result consistent with less specific representations for mid vowels. MMNs of equal magnitude were elicited in the high–low comparison, consistent with more specific representations for both high and low vowels. M100 dipole locations support early vowel categorization on the basis of linguistically relevant acoustic–phonetic features.
Conclusion: We take our results to reflect abstract long-term representations of vowels that do not include redundant specifications at very early stages of processing the speech signal. Moreover, the dipole locations indicate extraction of distinctive features and their mapping onto representationally faithful cortical locations (i.e., a feature map).
The paper is available here.
References
- Scharinger M, Monahan PJ, Idsardi WJ. Asymmetries in the processing of vowel height. J Speech Lang Hear Res. 2012 Jun;55(3):903–18. PMID: 22232394.