Categories
EEG / MEG Linguistics Neural Oscillations Papers Publications

New paper out: Dissociation of alpha and theta oscillations (Strauß, Kotz, Scharinger, Obleser)

We are very happy to announce that PhD student Antje Strauß got her paper

Alpha and theta brain oscillations index dissociable processes in spoken word recognition

accepted at NeuroImage. Congratulations! Find her paper here.

See the Abstract
Slow neural oscillations (∼1–15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study utilizes an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified at least for theta (∼3–7 Hz) and alpha frequencies (∼8–12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word–pseudoword continuum: ranging from (1) real words over (2) ambiguous pseudowords (deviating from real words only in one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time–frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally “gate” lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. Taken together, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition.
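For readers curious how such a band-wise dissociation is computed in practice, here is a minimal sketch of time-frequency power estimation via complex Morlet wavelets. The signal, sampling rate, and band definitions below are illustrative assumptions, not the study's actual pipeline (which additionally involves spatial filtering and real MEG/EEG data):

```python
import numpy as np

def morlet_power(x, fs, freqs, n_cycles=5.0):
    """Time-resolved power, averaged over `freqs`, via complex Morlet wavelets."""
    pows = []
    for f in freqs:
        sigma_t = n_cycles / (2 * np.pi * f)        # SD of Gaussian envelope (s)
        tw = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
        wavelet /= np.sum(np.abs(wavelet))          # crude amplitude normalization
        analytic = np.convolve(x, wavelet, mode="same")
        pows.append(np.abs(analytic) ** 2)          # power = squared magnitude
    return np.mean(pows, axis=0)

fs = 250.0
t = np.arange(0.0, 4.0, 1 / fs)
# Synthetic "trial": strong 10 Hz (alpha-range) plus weak 5 Hz (theta-range) component
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

alpha = morlet_power(x, fs, freqs=[8, 9, 10, 11, 12])   # ~8–12 Hz band
theta = morlet_power(x, fs, freqs=[3, 4, 5, 6, 7])      # ~3–7 Hz band
# For this signal, mean alpha-band power exceeds mean theta-band power
```

In a real analysis, the power time courses would be baseline-corrected and averaged across trials before comparing conditions.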

References

  • Strauß A, Kotz SA, Scharinger M, Obleser J. Alpha and theta brain oscillations index dissociable processes in spoken word recognition. Neuroimage. 2014 Apr 18. PMID: 24747736.
Categories
EEG / MEG Evoked Activity Linguistics Papers Perception Place of Articulation Features Publications Speech

New paper in press — Scharinger et al., PLOS ONE [Update]

We are happy that our paper

A Sparse Neural Code for Some Speech Sounds but Not for Others

is scheduled for publication in PLOS ONE on July 16th, 2012.

This is also our first paper in collaboration with Alexandra Bendixen from the University of Leipzig.

The research reported in this article extends the predictive coding framework to speech sounds and assumes that auditory processing uses predictions that are not only derived from ongoing contextual updates, but also from long-term memory representations — neural codes — of speech sounds. Using the German minimal pair [lats]/[laks] (bib/salmon) in a passive-oddball design, we find the expected Mismatch Negativity (MMN) asymmetry that is compatible with a predictive coding framework, but also with linguistic underspecification theory.
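For illustration, the logic of a passive-oddball MMN analysis — average the deviant and standard epochs, subtract, and look for a negativity roughly 100–250 ms after change onset — can be sketched on synthetic data. All amplitudes, latencies, and trial counts below are invented for the demo:

```python
import numpy as np

fs = 500.0
t = np.arange(-0.1, 0.5, 1 / fs)               # epoch: -100 to 500 ms

def average_erp(mmn_amp, n_trials=100, seed=0):
    """Synthetic average ERP: an N1-like dip at ~100 ms plus an optional
    MMN-like negativity at ~180 ms, buried in trial-by-trial noise."""
    rng = np.random.default_rng(seed)
    n1 = -1.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02**2))
    mmn = mmn_amp * np.exp(-((t - 0.18) ** 2) / (2 * 0.03**2))
    trials = n1 + mmn + rng.normal(0.0, 0.5, size=(n_trials, t.size))
    return trials.mean(axis=0)

standard = average_erp(mmn_amp=0.0, seed=0)    # frequent stimulus
deviant = average_erp(mmn_amp=-1.5, seed=1)    # rare stimulus, extra negativity

mmn_wave = deviant - standard                  # the mismatch response
win = (t >= 0.10) & (t <= 0.25)                # typical MMN search window
peak_amp = mmn_wave[win].min()                 # most negative point in window
peak_lat = t[win][np.argmin(mmn_wave[win])]    # its latency (s)
```

The asymmetry reported in the paper corresponds to this difference wave being larger in one direction of standard–deviant reversal than in the other.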

[Update]

Paper is available here.

References

  • Scharinger M, Bendixen A, Trujillo-Barreto NJ, Obleser J. A sparse neural code for some speech sounds but not for others. PLoS One. 2012;7(7):e40953. PMID: 22815876.
Categories
Auditory Perception Auditory Speech Processing EEG / MEG Evoked Activity Linguistics Papers Place of Articulation Features Publications Speech

New paper out in Journal of Speech, Language, & Hearing Research [Update]

We are happy to announce that our paper “Asymmetries in the processing of vowel height” will appear in the Journal of Speech, Language, & Hearing Research, authored by Philip Monahan, William Idsardi and Mathias Scharinger. A short summary is given below:

Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the representation of mid vowels (e.g., [ɛ]) that are articulated with a neutral position in regard to height. One hypothesis is that their representation is less specific than the representation of vowels with a more specific position (e.g., [æ]).

Method: In a magnetoencephalography study, we tested the underspecification of the mid vowel in American English. Using a mismatch negativity (MMN) paradigm, mid and low lax vowels ([ɛ]/[æ]), and high and low lax vowels ([ɪ]/[æ]), were opposed, and M100/N1 dipole source parameters as well as MMN latency and amplitude were examined.

Results: Larger MMNs occurred when the mid vowel [ɛ] was a deviant to the standard [æ], a result consistent with less specific representations for mid vowels. MMNs of equal magnitude were elicited in the high–low comparison, consistent with more specific representations for both high and low vowels. M100 dipole locations support early vowel categorization on the basis of linguistically relevant acoustic–phonetic features.

Conclusion: We take our results to reflect an abstract long-term representation of vowels that does not include redundant specifications at very early stages of processing the speech signal. Moreover, the dipole locations indicate extraction of distinctive features and their mapping onto representationally faithful cortical locations (i.e., a feature map).

[Update]

The paper is available here.

References

  • Scharinger M, Monahan PJ, Idsardi WJ. Asymmetries in the processing of vowel height. J Speech Lang Hear Res. 2012 Jun;55(3):903–18. PMID: 22232394.
Categories
Auditory Cortex EEG / MEG Evoked Activity Linguistics Papers Publications

New Paper out: HELLO? in press (NeuroImage)

Phonetic cues instantaneously mapped onto dialectal categories appear to be extracted at early moments in auditory speech perception, as we try to show in our paper

You had me at “Hello”: Rapid extraction of dialect information from spoken words

to appear in NeuroImage (Mathias Scharinger, Philip Monahan, William Idsardi).

In a modified passive oddball design, we examine the Mismatch Negativity (MMN) elicited by deviants from one American English dialect (Standard American English or African-American Vernacular English) against standards from the respective other dialect. In a control condition, deviants within the same dialect have the same averaged acoustic distance to their standards as the cross-dialectal averaged acoustic distance. Standards and deviants were always spoken exemplars of ‘Hello’ in both dialects (ca. 500 ms). MMN effects are significant in the cross-dialectal condition only, implying that a pure acoustic standard–deviant distance is not sufficient to elicit substantial mismatch effects. We interpret these findings, together with N1m source localization data, as evidence for a rapid extraction of dialect information via salient acoustic-phonetic cues. From the location and orientation of the N1m source activity, we can infer that dialect switches from standards to deviants engage areas in superior temporal sulcus/gyrus.

References

  • Scharinger M, Monahan PJ, Idsardi WJ. You had me at “Hello”: Rapid extraction of dialect information from spoken words. Neuroimage. 2011 Jun 15;56(4):2329–38. PMID: 21511041.
Categories
Auditory Cortex EEG / MEG Evoked Activity Papers Place of Articulation Features Publications

New Paper out: Comprehensive map of a language’s vowel space

We are glad to announce that our paper (Mathias Scharinger, Samantha Poe, & William Idsardi) on cortical representations of Turkish vowels is in press in the Journal of Cognitive Neuroscience. In this paper, we extend previous methods of obtaining centers of cortical activity evoked by vowel exemplars (e.g. Obleser et al., 2003, on German) and provide an N1m ECD (Equivalent Current Dipole) map of the entire vowel space of Turkish. Intriguingly, ECD locations mapped nearly perfectly onto locations in F2/F1 space, although our model comparison suggested that including discrete feature-based predictors for both locations and collocations of vowels in auditory cortex improves the model fits substantially. We discuss the findings against the background of neural coding schemes for speech-related auditory categories.

Figure 1: Locations of Turkish vowel stimuli in acoustic space (F1, F2; top panel) and N1m ECD locations in cortical space (lateral–medial / anterior–posterior / inferior–superior; bottom panel).
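The core claim — that positions in acoustic (F2/F1) space predict N1m dipole positions in cortical space — amounts to a regression of dipole coordinates on formant values. A toy version follows; all formant values, gains, and noise levels are invented for illustration, and the study's actual model comparison additionally included discrete feature-based predictors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (F1, F2) values in Hz for five vowel categories — illustrative only
formants = np.array([
    [300.0, 2300.0],   # /i/-like
    [300.0,  800.0],   # /u/-like
    [500.0, 1900.0],   # /e/-like
    [500.0, 1000.0],   # /o/-like
    [700.0, 1400.0],   # /a/-like
])

# Simulate dipole coordinates (mm) that depend linearly on the formants, plus noise
true_gain = np.array([[0.004, 0.0],
                      [0.0,   0.003]])      # made-up acoustic-to-cortex gains
ecd = formants @ true_gain + rng.normal(0.0, 0.2, size=formants.shape)

# Ordinary least squares: how well do (F1, F2) predict cortical location?
X = np.column_stack([formants, np.ones(len(formants))])   # add an intercept
coef, *_ = np.linalg.lstsq(X, ecd, rcond=None)
pred = X @ coef
r2 = 1.0 - ((ecd - pred) ** 2).sum() / ((ecd - ecd.mean(axis=0)) ** 2).sum()
```

A model comparison of the kind described above would pit this purely acoustic predictor set against one augmented with discrete feature terms and compare the fits.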

UPDATE: Paper is available here.

References

  • Scharinger M, Idsardi WJ, Poe S. A comprehensive three-dimensional cortical map of vowel space. J Cogn Neurosci. 2011 Dec;23(12):3972–82. PMID: 21568638.
Categories
Auditory Neuroscience Auditory Perception fMRI Linguistics Papers Publications Speech

New paper out: “Upstream delegation” for processing of complex syntax under degraded acoustics

A new paper is about to appear in Neuroimage on the interaction of syntactic complexity and acoustic degradation.

It is written by myself, PhD student Lars Meyer, and Angela Friederici. In a way, the paper brings together one of Angela’s main research questions (which brain circuits mediate the processing of syntax?) with a long-standing interest of mine, namely, how adverse listening situations affect the comprehension of speech.

The paper is entitled

Dynamic assignment of neural resources in auditory comprehension of complex sentences

The paper first establishes that acoustic variants of increasingly complex sentences essentially behave like written versions of these sentences.
The data then neatly show that processing challenging (but legal) syntax under various levels of degradation has a very different effect on the neural circuits involved than profiting from semantics: While the latter has been shown previously to involve more widespread, heteromodal brain areas, the double demand of increasingly complex syntax and an increasingly degraded speech signal (from which the complex syntax has to be parsed) elicits an “upstream” shift of activation back to less abstract processing areas in the superior temporal and prefrontal/frontal cortex.

We have tentatively termed this process “upstream delegation”. We also tried out a slightly unusual method to do justice to the fMRI activation data: We included all z-scores gathered along certain spatial dimensions, irrespective of whether they were sub- or suprathreshold, and treated them as distributions. Check it out and let us know what you think.
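The distributional idea can be illustrated in a few lines: instead of counting only the voxels that survive a significance threshold, keep every z-score along a spatial dimension and compare the full distributions between conditions. The numbers below are simulated, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated z-score maps along one spatial axis (one value per voxel)
# for two hypothetical conditions — purely illustrative numbers
n_voxels = 500
z_cond_a = rng.normal(loc=0.5, scale=1.0, size=n_voxels)
z_cond_b = rng.normal(loc=1.2, scale=1.0, size=n_voxels)

# Conventional thresholding discards nearly all voxels ...
threshold = 3.1
surviving = (z_cond_a > threshold).sum() + (z_cond_b > threshold).sum()

# ... whereas comparing the full distributions retains sub-threshold structure:
# a two-sample Kolmogorov–Smirnov test detects the shift between conditions
res = stats.ks_2samp(z_cond_b, z_cond_a)
d_stat, p_val = res.statistic, res.pvalue
```

The point of the demo: the two simulated conditions differ clearly as distributions even though almost no individual voxel clears the threshold.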

References

  • Obleser J, Meyer L, Friederici AD. Dynamic assignment of neural resources in auditory comprehension of complex sentences. Neuroimage. 2011 Jun 15;56(4):2310–20. PMID: 21421059.
Categories
Auditory Cortex Auditory Speech Processing EEG / MEG Evoked Activity Linguistics Papers Place of Articulation Features Publications

Paper in press: Are labials special?

This went online just a day before Christmas:

Neuromagnetic evidence for a featural distinction of English consonants: Sensor- and source-space data

by Mathias Scharinger, Jennifer Merickel, Joshua Riley, and William Idsardi
http://dx.doi.org/10.1016/j.bandl.2010.11.002

We wanted to look at featural (categorical) place-of-articulation distinctions in English consonants, and selected labial and coronal fricatives and glides for an MMN study. In this study, we looked at sensor- and source-space effects of labial deviants preceded by coronal standards and coronal deviants preceded by labial standards, across the two manners of articulation, i.e. fricatives and glides. Note that there are rather dramatic acoustic differences between these manners of articulation: uncorrelated noise through a narrow constriction vs. a vowel-like sound with typical resonance frequencies. We found consistent place-of-articulation effects, independent of manner of articulation: labial deviants produced larger MMNs, contra a directional hypothesis of underspecification, and dipole source locations followed the Obleser gradient in that labials elicited N1m dipoles anterior to dipoles of coronals in auditory cortex.

References

  • Scharinger M, Merickel J, Riley J, Idsardi WJ. Neuromagnetic evidence for a featural distinction of English consonants: sensor- and source-space data. Brain Lang. 2011 Feb;116(2):71–82. PMID: 21185073.
Categories
Auditory Neuroscience Auditory Speech Processing EEG / MEG Linguistics Papers Psychology Publications Speech

New paper out: Are early N100 and the late Gamma-band response negatively correlated in comprehension of degraded speech?

Late 2010 was particularly good to us:

Multiple brain signatures of integration in the comprehension of degraded speech

by Jonas Obleser and Sonja Kotz, in NeuroImage.

The final pdf will hopefully be available online very soon. Meanwhile, the figure below captures our main results:

References

  • Obleser J, Kotz SA. Multiple brain signatures of integration in the comprehension of degraded speech. Neuroimage. 2011 Mar 15;55(2):713–23. PMID: 21172443.