Categories
Auditory Neuroscience, Auditory Perception, Degraded Acoustics, Executive Functions, Papers, Publications

Listening: The strategy matters [Update]

In press at Neuropsychologia

Thalamic and parietal brain morphology predicts auditory category learning


Categorizing sounds is vital for adaptive human behavior. Accordingly, changing listening situations (external noise, but also peripheral hearing loss in aging) require listeners to flexibly adjust their categorization strategies, e.g., by switching among the available acoustic cues. However, listeners differ considerably in these adaptive capabilities. For this reason, our study (Neuropsychologia, in press) employed voxel-based morphometry (VBM) to assess the degree to which individual brain morphology predicts such adaptive listening behavior.

References

  • Scharinger M, Henry MJ, Erb J, Meyer L, Obleser J. Thalamic and parietal brain morphology predicts auditory category learning. Neuropsychologia. 2014 Jan;53:75–83. PMID: 24035788.
Categories
Degraded Acoustics, fMRI, Noise-Vocoded Speech, Papers, Publications, Speech

New paper in press: Erb et al., Neuropsychologia [Update]

I am very proud to announce the first paper that was entirely planned, conducted, analysed, and written up since our group has been in existence. Julia joined me as the first PhD student in December 2010, and has since been busy doing awesome work. Check out her first paper!

Auditory skills and brain morphology predict individual differences in adaptation to degraded speech

Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable from non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded-speech learning paradigm (listening to 100 four-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, with modulation rates centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated, and structural MRI scans were examined for anatomical predictors of vocoded-speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech showed lower thresholds in the AM discrimination task. Anatomical brain scans revealed that faster learners had increased volume in the left thalamus (pulvinar). These results suggest that adaptation to vocoded speech benefits from individual AM discrimination skills. This ability to adjust to degraded speech is furthermore reflected anatomically in an increased volume in an area of the thalamus that is strongly connected to auditory and prefrontal cortex. Thus, individual auditory skills that are not speech-specific and left-thalamus gray matter volume can predict how quickly a listener adapts to degraded speech.

Please be in touch with Julia Erb if you are interested in a preprint as soon as we get hold of the final, typeset manuscript.
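For readers unfamiliar with the stimulus manipulation: a noise vocoder discards spectral fine structure but keeps each band's temporal envelope. Below is a minimal sketch in Python (NumPy/SciPy). The band edges, filter order, envelope extraction method, and the AM-noise parameters are illustrative assumptions for demonstration, not the exact settings used by Erb et al.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Sketch of an n-band noise vocoder: each band's spectral detail is
    replaced by band-limited noise modulated by that band's envelope.
    Corner frequencies and filter order are arbitrary choices here."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))       # temporal envelope of the band
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier              # envelope-modulated noise band
    return out

def am_noise(fs, dur, rate_hz, depth=1.0):
    """Amplitude-modulated noise, e.g. at the speech-relevant 4 Hz rate,
    as a generic stand-in for an AM rate-discrimination stimulus."""
    t = np.arange(int(fs * dur)) / fs
    carrier = np.random.default_rng(1).standard_normal(len(t))
    return (1.0 + depth * np.sin(2 * np.pi * rate_hz * t)) * carrier
```

Summing the modulated noise bands yields a signal that is unintelligible at first but carries enough envelope information for listeners to learn to understand it, which is exactly what makes per-listener learning curves measurable.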

[Update #1]: Julia has also published a blog post on her work.

[Update #2]: The paper is available here.

References

  • Erb J, Henry MJ, Eisner F, Obleser J. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia. 2012 Jul;50(9):2154–64. PMID: 22609577.
Categories
Auditory Cortex, Auditory Speech Processing, fMRI, Papers, Publications, Speech

New paper out: McGettigan et al., Neuropsychologia


Last year’s lab guest and long-time collaborator Carolyn McGettigan has put out another one:

Speech comprehension aided by multiple modalities: Behavioural and neural interactions

I had the pleasure of being involved initially, when Carolyn conceived much of this, and again when things came together in the end. Carolyn nicely demonstrates how varying audio and visual clarity interacts with the semantic benefit a listener can get from the famous Kalikow SPIN (speech in noise) sentences. The data highlight the posterior STS and the fusiform gyrus as sites of convergence for auditory, visual, and linguistic information.

Check it out!

References

  • McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK. Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia. 2012 Apr;50(5):762–76. PMID: 22266262.