Categories
Ageing, Auditory Neuroscience, Auditory Speech Processing, Clinical relevance, Degraded Acoustics, Executive Functions, fMRI, Hearing Loss, Noise-Vocoded Speech, Papers, Publications, Speech

New paper in press: Erb & Obleser, Frontiers in Systems Neuroscience

Julia Erb just had the third study of her PhD project accepted:

Upregulation of cognitive control networks in older adults’ speech comprehension

It will appear in Frontiers in Systems Neuroscience soon.

The data are an extension (to older adults) of Julia’s Journal of Neuroscience paper from earlier this year.

References

  • Erb J, Obleser J. Upregulation of cognitive control networks in older adults’ speech comprehension. Front Syst Neurosci. 2013 Dec 24;7:116. PMID: 24399939.
Categories
Auditory Cortex, Auditory Neuroscience, Auditory Perception, Auditory Speech Processing, EEG / MEG, Neural Oscillations, Neural Phase, Papers, Publications

New paper in press: Henry & Obleser, PLOS ONE [Update]

Watch this space and the PLOS ONE website for a forthcoming article by Molly Henry and me:

Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex

Harking back to what we had argued initially in our 2012 Frontiers op-ed piece (together with Björn Herrmann), Molly presents neat evidence for dissociable cortical signatures of slow amplitude versus frequency modulation. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
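
For readers who want a feel for the stimulus contrast, here is a minimal Python sketch (not the study’s actual stimuli) of a tone whose amplitude versus whose frequency is modulated at the same slow rate; the carrier frequency, the 3-Hz modulation rate, and the modulation depths are illustrative assumptions only.

```python
import numpy as np

fs = 44100                          # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)     # 2 s of signal
fc = 1000.0                         # carrier frequency (Hz) -- assumed for illustration
fm = 3.0                            # slow modulation rate (Hz)

# Amplitude modulation: the envelope of a fixed-frequency carrier fluctuates at fm
am_sig = (1 + 0.8 * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: the instantaneous frequency fluctuates around fc at fm;
# the phase is the time-integral of the instantaneous frequency
delta_f = 200.0                     # peak frequency excursion (Hz) -- assumed
phase = 2 * np.pi * fc * t - (delta_f / fm) * np.cos(2 * np.pi * fm * t)
fm_sig = np.sin(phase)
```

Both signals fluctuate at the same slow rate, but one carries that fluctuation in the temporal envelope and the other in the spectral dimension, which is exactly the dissociation probed cortically here.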

[Update]

Paper is available here.

References

  • Henry MJ, Obleser J. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex. PLoS One. 2013 Oct 29;8(10):e78758. PMID: 24205309.
Categories
Auditory Cortex, Auditory Neuroscience, Auditory Perception, Auditory Speech Processing, Degraded Acoustics, Executive Functions, fMRI, Noise-Vocoded Speech, Papers, Perception, Publications, Speech

New paper out: Erb, Henry, Eisner & Obleser — Journal of Neuroscience

We are proud to announce that PhD student Julia Erb has just published a paper in the Journal of Neuroscience:

The Brain Dynamics of Rapid Perceptual Adaptation to Adverse Listening Conditions

[Figure: Effects of adaptation]

Grab it here:

Abstract:

Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences …
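
For readers unfamiliar with the degradation used here, the sketch below outlines generic noise-vocoding in Python: band-pass the speech into a few bands, extract each band’s amplitude envelope, and use it to modulate band-limited noise. This follows the standard Shannon-style recipe rather than the study’s exact settings; the band edges, filter orders, and envelope cutoff are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, band_edges=(100, 600, 1500, 3000, 7000)):
    """Noise-vocode `speech` into len(band_edges)-1 bands (4 by default).

    `speech` is a 1-D float array; `fs` must exceed twice the highest band edge.
    All filter settings are illustrative assumptions, not the study's parameters.
    """
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                 # band-limited speech
        env = np.abs(hilbert(band))                     # amplitude envelope of the band
        sos_env = butter(2, 30, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)                 # smooth envelope (30 Hz cutoff, assumed)
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))  # noise carrier in the same band
        out += env * noise                              # envelope-modulated noise replaces the band
    return out / np.max(np.abs(out))                    # normalize to avoid clipping
```

With only four bands the result is intelligible yet effortful, which is what makes it a useful test bed for short-term perceptual adaptation.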

References

  • Erb J, Henry MJ, Eisner F, Obleser J. The brain dynamics of rapid perceptual adaptation to adverse listening conditions. J Neurosci. 2013 Jun 26;33(26):10688–97. PMID: 23804092.
Categories
Auditory Neuroscience, Auditory Speech Processing, EEG / MEG, Media, Neural Oscillations, Publications, Speech

DRadio broadcast three features on neural oscillations (Henry & Obleser)

German radio broadcaster Deutschlandradio recently produced three reports on neural oscillations and our recent PNAS paper. You can listen to or read them (in German) here:

Next time we’ll post before the broadcast takes place…

References

  • Henry MJ, Obleser J. Frequency modulation entrains slow neural oscillations and optimizes human listening behavior. Proc Natl Acad Sci U S A. 2012 Dec 4;109(49):20095–100. PMID: 23151506.
Categories
Auditory Neuroscience, Auditory Speech Processing, EEG / MEG, Neural Oscillations, Neural Phase, Papers, Publications, Speech

New Paper in PNAS: Henry & Obleser [Updated]

Our new paper on neural entrainment by spectral fluctuations, and its effects on near-threshold auditory perception, is now online in the “early edition” of PNAS:

Henry, MJ & Obleser, J (in press):

Frequency modulation entrains slow neural oscillations and optimizes human listening behavior

Proceedings of the National Academy of Sciences of the United States of America (PNAS)


Here is the abstract:

The human ability to continuously track dynamic environmental stimuli, in particular speech, is proposed to profit from “entrainment” of endogenous neural oscillations, which involves phase reorganization such that “optimal” phase comes into line with temporally expected critical events, resulting in improved processing. The current experiment goes beyond previous work in this domain by addressing two thus far unanswered questions. First, how general is neural entrainment to environmental rhythms: Can neural oscillations be entrained by temporal dynamics of ongoing rhythmic stimuli without abrupt onsets? Second, does neural entrainment optimize performance of the perceptual system: Does human auditory perception benefit from neural phase reorganization? In a human electroencephalography study, listeners detected short gaps distributed uniformly with respect to the phase angle of a 3-Hz frequency-modulated stimulus. Listeners’ ability to detect gaps in the frequency-modulated sound was not uniformly distributed in time, but clustered in certain preferred phases of the modulation. Moreover, the optimal stimulus phase was individually determined by the neural delta oscillation entrained by the stimulus. Finally, delta phase predicted behavior better than stimulus phase or the event-related potential after the gap. This study demonstrates behavioral benefits of phase realignment in response to frequency-modulated auditory stimuli, overall suggesting that frequency fluctuations in natural environmental input provide a pacing signal for endogenous neural oscillations, thereby influencing perceptual processing.
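
The core analysis logic, relating gap detection to the phase of the entrained delta oscillation, can be sketched roughly as follows. This is not the paper’s pipeline: it assumes a single EEG channel, a simple band-pass plus Hilbert transform to estimate delta phase at each gap onset, and a plain binning of hit rates by that phase.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def hit_rate_by_delta_phase(eeg, fs, gap_samples, hits, n_bins=6):
    """Bin gap-detection hit rate by the delta phase at gap onset.

    eeg         : 1-D EEG time series (single channel, assumed)
    fs          : sampling rate in Hz
    gap_samples : integer sample indices of gap onsets
    hits        : boolean array, True where the gap was detected
    """
    # Estimate delta-band phase (around the 3-Hz stimulation rate) via band-pass + Hilbert
    sos = butter(3, [2.0, 4.0], btype="band", fs=fs, output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos, eeg)))

    phi = phase[gap_samples]                           # delta phase at each gap onset
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    which_bin = np.digitize(phi, edges) - 1
    rates = np.array([hits[which_bin == b].mean() if np.any(which_bin == b) else np.nan
                      for b in range(n_bins)])
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, rates
```

A hit rate that is non-uniform across the phase bins is the kind of pattern indicating that detection is phasically modulated by the entrained oscillation.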

NB: There is also a press release by the Max Planck Society on the topic.

References

  • Henry MJ, Obleser J. Frequency modulation entrains slow neural oscillations and optimizes human listening behavior. Proc Natl Acad Sci U S A. 2012 Dec 4;109(49):20095–100. PMID: 23151506.
Categories
Auditory Cortex, Auditory Speech Processing, fMRI, Papers, Publications, Speech

New paper out: McGettigan et al., Neuropsychologia


Last year’s lab guest and long-time collaborator Carolyn McGettigan has put out another one:

Speech comprehension aided by multiple modalities: Behavioural and neural interactions

I had the pleasure of being involved early on, when Carolyn conceived much of this, and again when things came together in the end. Carolyn nicely demonstrates how varying audio and visual clarity interacts with the semantic benefit a listener can draw from the famous Kalikow SPIN (speech-in-noise) sentences. The data highlight posterior STS and the fusiform gyrus as sites of convergence for auditory, visual, and linguistic information.

Check it out!

References

  • McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK. Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia. 2012 Apr;50(5):762–76. PMID: 22266262.
Categories
Auditory Speech Processing, Media, Publications

3-D animation of brain activations illustrates the idea of “upstream delegation”

Recently, with a data set dating back to my time in Angela Friederici’s department, we proposed the idea that auditory signal degradation affects the exact configuration of activity along the main processing streams of language, in the superior temporal and inferior frontal cortex. We tentatively coined this process “upstream delegation”: the activations driven by increasing syntactic demands, with the challenge of decreasing signal quality coming on top, were suddenly found more “upstream” from where we had located them with improving signal quality.

In a fascinating and instructive interactive 3-D version (oh, this sounds so 1990s, but it’s true!), you can now study and manipulate (in the literal sense, not the scientific-misconduct sense) this and various other findings from Angela’s lab yourself: fire up Chrome or Firefox and check it out here.
All of this is taken from a recent review by Angela [Friederici, AD (2011) Physiological Reviews, 91(4), 1357–1392], where she lays out her current take on inferior frontal cortex, the tracts connecting to and from it, and its role in syntax processing. The funky 3-D stuff is by Ralph Schurade. Don’t ask how long it took us to get all the coordinates in place.
Categories
Auditory Perception, Auditory Speech Processing, EEG / MEG, Evoked Activity, Linguistics, Papers, Place of Articulation Features, Publications, Speech

New paper out in Journal of Speech, Language, & Hearing Research [Update]

We are happy to announce that our paper “Asymmetries in the processing of vowel height”, authored by Philip Monahan, William Idsardi and Mathias Scharinger, will appear in the Journal of Speech, Language, & Hearing Research. A short summary is given below:

Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the representation of mid vowels (e.g., [ɛ]) that are articulated with a neutral position with regard to height. One hypothesis is that their representation is less specific than the representation of vowels with a more specific position (e.g., [æ]).

Method: In a magnetoencephalography study, we tested the underspecification of the mid vowel in American English. Using a mismatch negativity (MMN) paradigm, mid and low lax vowels ([ɛ]/[æ]) and high and low lax vowels ([ɪ]/[æ]) were opposed, and M100/N1 dipole source parameters as well as MMN latency and amplitude were examined.
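
As a reminder of how the mismatch response itself is quantified (leaving aside this study’s MEG source modelling), the MMN is the deviant-minus-standard difference wave, typically summarized in a latency window of roughly 150–250 ms. A minimal numpy sketch, with the window and the simple trial averaging as assumptions:

```python
import numpy as np

def mmn_amplitude(deviant_trials, standard_trials, times, window=(0.15, 0.25)):
    """Mean MMN amplitude: deviant-minus-standard difference wave in a latency window.

    deviant_trials, standard_trials : arrays of shape (n_trials, n_times)
    times                           : 1-D array of time points in seconds
    window                          : latency window in seconds (assumed, roughly 150-250 ms)
    """
    diff = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)  # difference wave
    mask = (times >= window[0]) & (times <= window[1])                 # restrict to the window
    return diff[mask].mean()
```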

Results: Larger MMNs occurred when the mid vowel [ɛ] was a deviant to the standard [æ], a result consistent with less specific representations for mid vowels. MMNs of equal magnitude were elicited in the high–low comparison, consistent with more specific representations for both high and low vowels. M100 dipole locations support early vowel categorization on the basis of linguistically relevant acoustic–phonetic features.

Conclusion: We take our results to reflect abstract long-term representations of vowels that do not include redundant specifications at very early stages of processing the speech signal. Moreover, the dipole locations indicate extraction of distinctive features and their mapping onto representationally faithful cortical locations (i.e., a feature map).

[Update]

The paper is available here.

References

  • Scharinger M, Monahan PJ, Idsardi WJ. Asymmetries in the processing of vowel height. J Speech Lang Hear Res. 2012 Jun;55(3):903–18. PMID: 22232394.