Categories
Neural Oscillations Papers Publications Speech

New paper out: “Don’t be enslaved by the envelope” – Comment on Giraud & Poeppel (2012)

Out today is a comment/opinion article, with a bit of fresh evidence from our lab, that is mainly a reply to Anne-Lise Giraud and David Poeppel’s recent “perspective” article on neural oscillations in speech.

We loved that article, obviously, but after the initial excitement, a few concerns stuck with us. In essence, the problems are (i) how to define theta for the purposes of analysing speech comprehension processes, (ii) not to focus overly on the speech envelope (i.e., not to neglect spectral/fine-structure aspects of speech), and (iii) the unsolved chicken-and-egg problem of how neural entrainment and speech intelligibility really relate to each other.

But read for yourself (it’s pleasantly short!).

References

  • Obleser J, Herrmann B, Henry MJ. Neural oscillations in speech: don’t be enslaved by the envelope. Front Hum Neurosci. 2012 Aug 31;6:250. PMID: 22969717.
Categories
EEG / MEG Evoked Activity Linguistics Papers Perception Place of Articulation Features Publications Speech

New paper in press — Scharinger et al., PLOS ONE [Update]

We are happy that our paper

A Sparse Neural Code for Some Speech Sounds but Not for Others

is scheduled for publication in PLOS ONE on July 16th, 2012.

This is also our first paper in collaboration with Alexandra Bendixen from the University of Leipzig.

The research reported in this article extends the predictive coding framework to speech sounds: it assumes that auditory processing uses predictions derived not only from ongoing contextual updates, but also from long-term memory representations (neural codes) of speech sounds. Using the German minimal pair [lats]/[laks] (bib/salmon) in a passive-oddball design, we find the expected Mismatch Negativity (MMN) asymmetry, which is compatible with a predictive coding framework, but also with linguistic underspecification theory.
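For readers unfamiliar with the paradigm, here is a minimal sketch of how such a passive-oddball stimulus sequence could be generated. The 12.5% deviant rate and the constraint that two deviants never occur back-to-back are common conventions assumed for illustration, not parameters reported in the paper.

```python
import random

def oddball_sequence(n_trials=600, p_deviant=0.125, seed=1):
    """Pseudo-random oddball sequence: mostly standards, rare deviants.

    The deviant probability and the no-two-deviants-in-a-row constraint
    are common conventions, assumed here, not taken from the paper.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        allowed = not seq or seq[-1] == "standard"
        seq.append("deviant" if (allowed and rng.random() < p_deviant)
                   else "standard")
    return seq

# In one block [lats] serves as standard and [laks] as deviant;
# swapping the roles in a second block is what exposes the asymmetry.
tokens = {"standard": "[lats]", "deviant": "[laks]"}
print([tokens[t] for t in oddball_sequence()][:20])
```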

[Update]

Paper is available here.

References

  • Scharinger M, Bendixen A, Trujillo-Barreto NJ, Obleser J. A sparse neural code for some speech sounds but not for others. PLoS One. 2012;7(7):e40953. PMID: 22815876.
Categories
Degraded Acoustics fMRI Noise-Vocoded Speech Papers Publications Speech

New paper in press: Erb et al., Neuropsychologia [Update]

I am very proud to announce the first paper that was entirely planned, conducted, analysed and written up since our group came into existence. Julia joined me as the first PhD student in December 2010 and has since been busy doing awesome work. Check out her first paper!

Auditory skills and brain morphology predict individual differences in adaptation to degraded speech

Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 four-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated, and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech showed smaller thresholds in the AM discrimination task. Anatomical brain scans revealed that faster learners had increased volume in the left thalamus (pulvinar). These results suggest that adaptation to vocoded speech benefits from individual AM discrimination skills. This ability to adjust to degraded speech is furthermore reflected anatomically in an increased volume in an area of the thalamus that is strongly connected to auditory and prefrontal cortex. Thus, individual auditory skills that are not speech-specific and left-thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.

Please be in touch with Julia Erb if you are interested in a preprint as soon as we get hold of the final, typeset manuscript.
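As background, noise-vocoding itself is a simple signal-processing recipe (Shannon et al., 1995): split the speech signal into frequency bands, extract each band’s slow amplitude envelope, and use it to modulate band-limited noise. Here is a minimal Python sketch; the band edges, filter orders, and 30 Hz envelope cutoff are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, band_edges=(100, 562, 1737, 4362, 8000),
                 env_cutoff=30.0):
    """Noise-vocode a speech waveform: per frequency band, keep the slow
    amplitude envelope but replace the fine structure with noise.

    band_edges (four bands here) and env_cutoff are illustrative
    assumptions; fs must exceed twice the highest band edge."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))   # slow envelope
        env = np.clip(env, 0.0, None)                       # no negative gain
        noise = sosfiltfilt(band_sos, rng.standard_normal(len(speech)))
        out += env * noise                                  # modulated noise band
    return out / np.max(np.abs(out))                        # normalize peak
```

With only four bands, sentences are initially hard to understand, and how quickly listeners improve over the 100 sentences is the learning measure of the study.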

[Update #1]: Julia has also published a blog post on her work.

[Update #2]: Paper is available here.

Ref­er­ences

  • Erb J, Henry MJ, Eisner F, Obleser J. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia. 2012 Jul;50(9):2154–64. PMID: 22609577.
Categories
Editorial Notes Neural Oscillations Publications

New Public Science Weblog by our Max Planck Institute

I am personally not entirely convinced that weblogs will survive as a tool for communication. Notwithstanding, I support the idea of our Institute, together with the German popular science magazine “Spektrum der Wissenschaft”, to start a new blog entitled “Neurocognition”. It is hosted at scilogs.eu and scilogs.de. I have the honour of serving as one of the staff writers there; let’s see where this will take us.

For a start, I let go and wrote about my fascination with brain oscillations. Please at least pretend to be surprised by this choice of topic!

Categories
Auditory Cortex Auditory Speech Processing fMRI Papers Publications Speech

New paper out: McGettigan et al., Neuropsychologia


Last year’s lab guest and long-time collaborator Carolyn McGettigan has put out another one:

Speech comprehension aided by multiple modalities: Behavioural and neural interactions

I had the pleasure of being involved initially, when Carolyn conceived much of this, and again when things came together in the end. Carolyn nicely demonstrates how varying audio and visual clarity interacts with the semantic benefit a listener can get from the famous Kalikow SPIN (speech in noise) sentences. The data highlight the posterior STS and the fusiform gyrus as sites of convergence for auditory, visual and linguistic information.

Check it out!

References

  • McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK. Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia. 2012 Apr;50(5):762–76. PMID: 22266262.
Categories
Auditory Speech Processing Media Publications

3‑D animation of brain activations illustrates the idea of “upstream delegation”

Recently, with a data set dating back to my time in Angela Friederici’s department, we proposed the idea that auditory signal degradation would affect the exact configuration of activity along the main processing streams of language, in the superior temporal and inferior frontal cortex. We tentatively coined this process “upstream delegation”: the activations that were driven by increasing syntactic demands, with the challenge of decreasing signal quality coming on top, were all of a sudden found more “upstream” from where we had located them with improving signal quality.

In a fascinating and instructive interactive 3‑D version (oh, this sounds so 1990s, but it’s true!), you can now study and manipulate (in the literal sense, not the scientific-misconduct sense) this and various other findings from Angela’s lab yourself: fire up Chrome or Firefox and check it out here.
All of this is taken from a recent review by Angela [Friederici AD (2011) Physiological Reviews, 91(4), 1357–1392], in which she lays out her current take on the inferior frontal cortex, the tracts connecting to and from it, and its role in syntax processing. The funky 3‑D stuff is by Ralph Schurade. Don’t ask how long it took us to get all the coordinates in place.
Categories
Auditory Working Memory Degraded Acoustics EEG / MEG Events Executive Functions Neural Oscillations Posters Publications

Further posters at SFN / Neuroscience 2011

In addition to the exciting consonantal mismatch negativity work Mathias and Alexandra will be showing (TUESDAY AM session, posters UU10 and UU11), we will have the following posters this year. Come by!

Chris Petkov and I are showing our brand-new data in the TUESDAY PM session, poster LL14.

I myself will be presenting in the WEDNESDAY AM session, XX15 – more alpha oscillations in working memory under speech degradation.

Finally, I also have the pleasure of being a co-author on Sarah Jessen’s poster; she is showing très cool multimodal integration data on voices and bodies under noisy conditions in the WEDNESDAY PM session, XX15.

Categories
Auditory Perception EEG / MEG Events Evoked Activity Posters Publications Speech

Poster Presentations at SFN

There will be two poster presentations at SFN in Washington, DC, on the topic of auditory predictions in speech perception. The first poster, authored by Alexandra Bendixen, Mathias Scharinger, and Jonas Obleser, summarizes as follows:

Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., sluggish articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative mechanisms. The nature of the underlying neural mechanisms is not yet well understood. In the present study, we investigated the detection of missing information by occasionally omitting the final consonants of the German words “Lachs” (salmon) or “Latz” (bib), resulting in the syllable “La” (no semantic meaning). In three different conditions, stimulus presentation was set up so that subjects expected only the word “Lachs” (condition 1), only the word “Latz” (condition 2), or the words “Lachs” or “Latz” with equal probability (condition 3). Thus, essentially, the final segment was predictable in conditions 1 and 2, but unpredictable in condition 3. Stimuli were presented outside the focus of attention while subjects were watching a silent video. Brain responses were measured with multi-channel electroencephalogram (EEG) recordings. In all conditions, an omission response was elicited from 125 to 165 ms after the expected onset of the final segment. The omission response shared characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Critically, the omission response was enhanced in amplitude in the two predictable conditions (1, 2) compared to the unpredictable condition (3). Violating a strong prediction thus elicited a more pronounced omission response. Consistent with a predictive coding account, the processing of missing linguistic information appears to be modulated by predictive context.
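To make the analysis logic concrete, here is a minimal sketch of how the omission response could be quantified from segmented EEG data: average the epochs per condition and take the mean amplitude in the 125–165 ms window after the expected onset of the omitted segment. The array layout, channel choice, sampling rate and baseline offset are assumptions for illustration, not the actual pipeline.

```python
import numpy as np

def omission_amplitude(epochs, fs=500.0, t0=-0.1, window=(0.125, 0.165)):
    """Mean ERP amplitude in the omission-response window.

    epochs: (n_trials, n_samples) array from a fronto-central channel,
    time-locked to the expected onset of the omitted final segment.
    fs, t0 (epoch start relative to that onset) and the channel choice
    are assumptions for illustration."""
    erp = epochs.mean(axis=0)                      # average across trials
    times = np.arange(erp.size) / fs + t0          # seconds relative to onset
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# The comparison of interest: a more negative value in the two
# predictable conditions than in the unpredictable one would mirror
# the enhanced omission response described above.
```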

The second poster looks at similar material, but contrasts coronal [t] with dorsal [k], yielding interesting asymmetries in MMN responses:

Research in auditory neuroscience has led to a better understanding of the neural bases of speech perception, but the representational nature of speech sounds within words is still a matter of debate. Electrophysiological research on single speech sounds provided evidence for abstract representational units that comprise information about both acoustic structure and articulator configuration (Phillips et al., 2000), thereby referring to phonological categories. Here, we test the processing of word-final consonants differing in their place of articulation (coronal [ts] vs. dorsal [ks]) and acoustic structure, as seen in the time-varying formant (resonance) frequencies. The respective consonants distinguish between the German nouns Latz (bib) and Lachs (salmon), recorded from a female native speaker. Initial consonant-vowel sequences were averaged across the two nouns in order to avoid coarticulatory cues before the release of the consonants. Latz and Lachs served as standard and deviant in a passive oddball paradigm, while the EEG from 20 participants was recorded. The change from standard [ts] to deviant [ks] and vice versa was accompanied by a discernible Mismatch Negativity (MMN) response (Näätänen et al., 2007). This response showed an intriguing asymmetry, as seen in a main effect of condition (deviant Latz vs. deviant Lachs, F(1,1920) = 291.84, p < 0.001) in an omnibus mixed-effects model. Crucially, the MMN for the deviant Latz was on average more negative than the MMN for the deviant Lachs from 135 to 185 ms post deviance onset (p < 0.001). We interpret these findings as reflecting a difference in phonological specificity: following Eulitz and Lahiri (2004), we assume coronal segments ([ts]) to have less specific (‘featurally underspecified’) representations than dorsal segments ([ks]). In standard position, Lachs activated a memory trace with a more specific final consonant, for which the deviant provided a stronger mismatch than in the reverse case, i.e. when Latz activated a memory trace with a less specific final consonant. Our results support a model of speech perception in which sensory information is processed in terms of discrete units independent of higher lexical properties, as the asymmetry cannot be explained by differences in lexical surface frequency between Latz and Lachs (both log-frequencies of 0.69). We can also rule out a frequency effect at the segmental level. Thus, it appears that speech perception involves a level of processing at which individual segmental representations within words are evaluated.
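For readers who want to see the shape of such an omnibus model, a sketch along the following lines would do, here using the statsmodels package rather than whatever software was actually used; the file name, column names, and random-effects structure are assumptions for illustration, not the actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per subject and trial: single-trial MMN amplitude (deviant
# minus standard, averaged over 135-185 ms post deviance onset) and
# which word served as deviant. File and column names are hypothetical.
data = pd.read_csv("mmn_amplitudes.csv")

# Fixed effect of condition (deviant Latz vs. deviant Lachs),
# random intercept per participant.
model = smf.mixedlm("amplitude ~ condition", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```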