Categories
Acoustics Familiarity Papers Perception Publications Voice

New Paper in Cognition by Lavan, Kreitewolf et al.

Congratulations to former Obleser postdoc Jens Kreitewolf (now at McGill University) for his new paper in Cognition, “Familiarity and task context shape the use of acoustic information in voice identity perception”!

Together with our colleagues from London, Nadine Lavan and Carolyn McGettigan, we took a new approach to test the longstanding theoretical claim that listeners differ in their use of acoustic information when perceiving identity from familiar and unfamiliar voices. Unlike previous studies that have related single acoustic features to voice identity perception, we linked listeners’ voice-identity judgments to more complex acoustic representations: the spectral similarity of voice recordings (see Figure below).
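The spectral-similarity idea can be sketched in a few lines: compare two recordings by the cosine similarity of their log-power spectra. This is a deliberately minimal stand-in, not the representation used in the paper; the function name, FFT settings, and sampling rate below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def spectral_similarity(x, y, fs=16000):
    """Cosine similarity between the log-power spectra of two recordings.

    A toy stand-in for the richer spectral representations used in the
    paper; x and y are 1-D waveform arrays sampled at fs Hz.
    """
    _, px = welch(x, fs=fs, nperseg=1024)
    _, py = welch(y, fs=fs, nperseg=1024)
    px, py = np.log(px + 1e-12), np.log(py + 1e-12)  # log-power compresses dynamic range
    return float(np.dot(px, py) / (np.linalg.norm(px) * np.linalg.norm(py)))

# Identical recordings are maximally similar (cosine of a vector with itself).
rng = np.random.default_rng(0)
a = rng.standard_normal(16000)
print(spectral_similarity(a, a))
```

In the actual study, such pairwise similarities over naturally varying recordings of the same voice are what listeners' identity judgments were related to.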

This new study has a direct link to pop culture (capitalizing on naturally varying voice recordings taken from the famous TV show Breaking Bad) and challenges traditional proposals that view familiar and unfamiliar voice perception as being distinct at all times.

Click here to find out more.

Categories
Attention Auditory Neuroscience EEG / MEG Papers Publications Speech Tracking Unilateral Vocoding

New Paper in Trends in Hearing by Kraus et al.

Frauke Kraus, Sarah Tune, Anna Ruhe, Jonas Obleser & Malte Wöstmann demonstrate that unilateral acoustic degradation delays the attentional separation of competing speech.

Unilateral cochlear implant (CI) users have to integrate acoustically intact speech at one ear and acoustically degraded speech at the other ear. How do unilateral acoustic degradation and spatial attention interact in a multitalker situation?
N = 22 participants took part in a competing-listening experiment, attending to an intact audiobook under distraction from an acoustically degraded audiobook, and vice versa. Speech tracking revealed that the attentional separation of acoustically degraded speech was not reduced per se, but delayed in time compared to intact speech. These findings might explain the listening challenges experienced by unilateral CI users.

To learn more, the paper is available here.

Categories
Ageing Auditory Cortex Auditory Neuroscience EEG / MEG Hearing Loss Neural Filters Papers Publications

New paper in Nature Communications by Tune et al.

We are very excited to share that Obleserlab postdoc Sarah Tune has a new paper in Nature Communications. “Neural attentional-filter mechanisms of listening success in middle-aged and older participants” is our latest and to-date most extensive output of the longitudinal ERC Consolidator project on adaptive listening in ageing individuals (AUDADAPT, https://auditorycognition.com/erc-audadapt/).

This co-production with current (Mohsen Alavash and Jonas Obleser) and former (Lorenz Fiedler) Obleserlab members takes an in-depth and integrative look at how two of the most extensively studied neurobiological attentional-filter implementations, alpha power lateralization and selective neural speech tracking, relate to one another and to listening success.

Leveraging our large, representative sample of aging listeners (N = 155, 39–80 years), we show that both neural filter implementations are robustly modulated by attention but operate surprisingly independently of one another.

In a series of sophisticated single-trial linear models that include variation in neural filter strength within and between individuals, we demonstrate how the preferential neural tracking of attended versus ignored speech, but not alpha lateralization, boosts listening success.

To learn more, the paper is available here.

Categories
Ageing EEG / MEG fMRI Papers Publications

New Perspective paper in Neuron by Waschke et al.

We are excited to share that former Obleserlab PhD student Leo Waschke, together with his new (Doug Garrett, Niels Kloosterman) and old (Jonas Obleser) labs, has published an in-depth perspective piece in Neuron, with the provocative title “Behavior needs neural variability”.
Our article is essentially a long and extensive tribute to the “second moment” of neural activity, in statistical terms: Variability, be it quantified as variance, entropy, or spectral slope, is the long-neglected twin of averages, and it holds great promise for understanding neural states (how does neural activity differ from one moment to the next?) and traits (how do individuals differ from each other?).
Congratulations, Leo!

Categories
Uncategorized

New paper in Schizophrenia Bulletin Open: Erb et al., Aberrant perceptual judgements on speech-relevant acoustic features in hallucination-prone individuals

Hallucinations – percepts in the absence of an external stimulus – constitute an intriguing model of how percepts are generated and how perception can fail. They can occur in psychotic disorders, but also in the general population.
Healthy adults varying in their predisposition to hallucinations were asked to identify “speech” in ambiguous sounds. Listeners qualifying as more hallucination-prone in two established questionnaires perceptually down-weighted the speech-typical low frequencies (purple subgroup in the figure for illustration). Instead, the hallucination-prone individuals prioritised high frequencies in their “speechiness” judgements of ambiguous sounds.
At the same time, the higher one scored on hallucination-proneness, the more confident they were on a given (always ambiguous!) trial. Hallucination-proneness and actual sensory evidence had a comparable impact on confidence, consistent with the idea that the emergence of hallucinations is rooted in an altered perception of sounds.
This research may contribute to improving early diagnosis and prevention strategies in individuals at risk for psychosis.

From the abstract:
“Hallucinations constitute an intriguing model of how percepts are generated and how perception can fail. Here, we investigate the hypothesis that an altered perceptual weighting of the spectro-temporal modulations that characterize speech contributes to the emergence of auditory verbal hallucinations. Healthy adults (N=168) varying in their predisposition for hallucinations had to choose the ‘more speech-like’ of two presented ambiguous sound textures and give a confidence judgement. Using psychophysical reverse correlation, we quantified the contribution of different acoustic features to a listener’s perceptual decisions. Higher hallucination proneness covaried with perceptual down-weighting of speech-typical, low-frequency acoustic energy while prioritising high frequencies. Remarkably, higher confidence judgements in single trials depended not only on acoustic evidence but also on an individual’s hallucination proneness and schizotypy score. In line with an account of altered perceptual priors and differential weighting of sensory evidence, these results show that hallucination-prone individuals exhibit qualitative and quantitative changes in their perception of the modulations typical for speech.”
The paper is available here.
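Psychophysical reverse correlation, the method named in the abstract, can be demonstrated on simulated choices: average the chosen-minus-unchosen feature differences to recover the perceptual weights a simulated listener used. All numbers below (trial count, band count, the "true" weights) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated experiment: on each trial the listener hears two "sound textures",
# each summarized by energy in 8 frequency bands, and picks the more
# speech-like one. Hypothetical true perceptual weights favour low bands.
n_trials, n_bands = 5000, 8
true_w = np.linspace(2.0, -1.0, n_bands)          # low bands weighted up

A = rng.standard_normal((n_trials, n_bands))      # texture A features
B = rng.standard_normal((n_trials, n_bands))      # texture B features
choose_A = ((A - B) @ true_w + rng.standard_normal(n_trials)) > 0

# Reverse correlation: mean feature difference on "A" choices minus "B" choices.
diff = A - B
kernel = diff[choose_A].mean(axis=0) - diff[~choose_A].mean(axis=0)

# The recovered kernel is proportional to the true perceptual weights.
r = np.corrcoef(kernel, true_w)[0, 1]
print(round(r, 2))
```

In the study, a down-weighting of low-frequency bands in hallucination-prone listeners would show up as a flattened or tilted kernel in exactly this kind of analysis.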

Categories
Uncategorized

New paper in press in Neuropsychologia

Wöstmann, Lui, Friese, Kreitewolf, Naujokat and Obleser demonstrate that the vulnerability of working memory to auditory distraction is rhythmic.

Previous research has shown that the attentional sampling of target stimuli is rhythmic at ~3–8 Hz (e.g. Fiebelkorn et al., 2013; Landau & Fries, 2012). In the present study, Malte Wöstmann and colleagues tested to what extent the suppression of distractor stimuli would be rhythmic as well. Indeed, two measures of distraction – memory recall accuracy and the distractor-evoked N1 ERP component – were periodically modulated at slow frequencies (~2–4 Hz) by the temporal onset of a distracting speech stimulus.

In a follow-up experiment, the rhythmic distractibility could be replicated: in a visual match-to-sample task, memory recall accuracy was periodically modulated at ~2.75 Hz by the onset of a distracting noise stimulus during memory retention.
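The spectral analysis behind such a claim can be sketched as follows: treat accuracy as a function of distractor-onset time and look for a peak in its amplitude spectrum. Sampling rate, duration, modulation depth, and noise level are all made-up values here; only the ~2.75 Hz target frequency comes from the post.

```python
import numpy as np

# Toy version of the behavioural spectral analysis: accuracy measured at
# densely sampled distractor-onset times, with a hypothetical 2.75-Hz
# rhythmic modulation buried in trial noise.
rng = np.random.default_rng(2)
fs = 20.0                                   # onset sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)                 # onset times across 4 s of retention
accuracy = 0.75 + 0.05 * np.sin(2 * np.pi * 2.75 * t) \
           + 0.01 * rng.standard_normal(t.size)

# Remove the mean and take the amplitude spectrum of the accuracy time course.
spec = np.abs(np.fft.rfft(accuracy - accuracy.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak = freqs[np.argmax(spec)]
print(f"peak modulation at {peak:.2f} Hz")  # → peak modulation at 2.75 Hz
```

In practice, the observed spectral peak is compared against a surrogate distribution (e.g. from shuffled onset times) to establish that the behavioural rhythm is reliable.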

The paper is available here.

For a preprint of the paper, see 

Categories
Editorial Notes Events Uncategorized

New members in the Obleser lab

In the Obleser lab, we welcome new members and PhD students Martin Orf and Troby Lui.
Martin did his MSc in Audiology Technology here at the University of Lübeck. He is now joining us for a PhD project generously funded by our industry partner Widex Sivantos Audiology, revolving around hearing aids and electroencephalographic signatures of attention.
Troby did her MSc in Psychology at Hong Kong University and has already published in NeuroImage. She will do her PhD under the direct supervision of Malte Wöstmann in our lab, working on a DFG-funded project on attentional rhythms.
We also bid farewell to long-time Obleserlab ally and PhD student Leo Waschke, who finished his PhD on a high, and who is now a postdoc in Doug Garrett’s lab at the Max Planck Institute in Berlin.
Hello and Goodbye!
Categories
Auditory Neuroscience Auditory Perception EEG / MEG Papers Perception Uncategorized

New paper in press in eLife: Waschke et al.

Obleserlab senior PhD student Leo Waschke, alongside co-authors Sarah Tune and Jonas Obleser, has a new paper in eLife.

The processing of sensory information from our environment is not constant but rather varies with changes in ongoing brain activity, or brain states. Thus, the acuity of perceptual decisions also depends on the brain state during which sensory information is processed. Recent work in non-human animals suggests two key processes that shape brain states relevant for sensory processing and perceptual performance. On the one hand, the momentary level of neural desynchronization in sensory cortical areas has been shown to impact neural representations of sensory input and related performance. On the other hand, the current level of arousal and related noradrenergic activity has been linked to changes in sensory processing and perceptual acuity.

However, it is at present unclear whether local neural desynchronization and arousal constitute distinct brain states that entail varying consequences for sensory processing and behaviour, or whether they represent two interrelated manifestations of ongoing brain activity that jointly affect behaviour. Furthermore, the exact shape of the relationship (e.g. linear vs. quadratic) between perceptual performance and each of these brain state markers is likewise unclear.

In order to transfer findings from animal physiology to human cognitive neuroscience, and to test the exact shape of the unique as well as shared influences of local cortical desynchronization and global arousal on sensory processing and perceptual performance, we recorded electroencephalography and pupillometry in 25 human participants while they performed a challenging auditory discrimination task.

Importantly, auditory stimuli were selectively presented during periods of especially high or low auditory cortical desynchronization, as approximated by an information-theoretic measure of time-series complexity (weighted permutation entropy). By means of a closed-loop real-time setup, we were not only able to present stimuli during different desynchronization states but also made sure to sample the whole distribution of such states, a prerequisite for the accurate assessment of brain–behaviour relationships. The recorded pupillometry data additionally enabled us to draw inferences regarding the current level of arousal, given the established link between noradrenergic activity and pupil size.
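Weighted permutation entropy itself is compact enough to sketch. The version below follows the general recipe (ordinal patterns of short windows, weighted by each window's variance), with embedding dimension and delay as free parameters; it is a didactic reimplementation, not the lab's actual closed-loop code.

```python
import numpy as np
from collections import defaultdict
from math import factorial, log

def weighted_permutation_entropy(x, m=3, tau=1):
    """Weighted permutation entropy of a 1-D signal.

    Each length-m window contributes its ordinal (rank-order) pattern,
    weighted by the window's variance, so high-amplitude structure counts
    more. The result is normalized to [0, 1]; higher values indicate a
    more desynchronized, irregular signal.
    """
    x = np.asarray(x, dtype=float)
    n = x.size - (m - 1) * tau
    weights = defaultdict(float)
    total = 0.0
    for i in range(n):
        window = x[i : i + m * tau : tau]
        pattern = tuple(np.argsort(window))   # ordinal pattern of the window
        w = window.var()                      # weight by local variance
        weights[pattern] += w
        total += w
    p = np.array([v / total for v in weights.values()])
    h = -np.sum(p * np.log(p))
    return h / log(factorial(m))              # normalize by maximum entropy

rng = np.random.default_rng(3)
print(weighted_permutation_entropy(rng.standard_normal(2000)))    # near 1: irregular
print(weighted_permutation_entropy(np.sin(np.arange(2000) / 5)))  # clearly lower: regular
```

In the closed-loop setup, a running estimate of exactly this kind of quantity over the recent EEG decides, in real time, whether the current moment counts as a high- or low-desynchronization state for stimulus presentation.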

 

Single-trial analyses of EEG activity, pupillometry and behaviour revealed clearly dissociable influences of both brain state markers on ongoing brain activity, early sound-related activity, and behaviour. High desynchronization states were characterized by a pronounced reduction in oscillatory power across a wide frequency range, while high arousal states coincided with a decrease in oscillatory power that was limited to high frequencies. Similarly, early sound-evoked activity was differentially impacted by auditory cortical desynchronization and pupil-linked arousal. Phase-locked responses and evoked gamma power increased with local desynchronization, with a tendency to saturate at intermediate levels. Post-stimulus low-frequency power, on the other hand, increased with pupil-linked arousal.

Most importantly, local desynchronization and pupil-linked arousal displayed different relationships with perceptual performance. While participants performed fastest and least biased following intermediate levels of auditory cortical desynchronization, intermediate levels of pupil-linked arousal were associated with the highest sensitivity. Thus, although both processes constitute behaviourally relevant brain states that affect perceptual performance following an inverted U, they impact distinct subdomains of behaviour. Taken together, our results speak to a model in which independent states of local desynchronization and global arousal jointly shape states for optimal sensory processing and perceptual performance. The published manuscript including all supplemental information can be found here.
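Statistically, an inverted-U relationship corresponds to a reliably negative quadratic term when performance is regressed on the brain-state marker. A minimal simulated check (all data fabricated; effect sizes arbitrary):

```python
import numpy as np

# Simulated inverted U: performance peaks at intermediate brain-state values.
rng = np.random.default_rng(5)
state = rng.uniform(-2, 2, 500)                 # standardized brain-state marker
perf = 1.0 - 0.3 * state**2 + 0.1 * rng.standard_normal(state.size)

# Fit performance ~ state^2 + state + const; an inverted U shows up as a
# negative leading (quadratic) coefficient.
quad, lin, intercept = np.polyfit(state, perf, deg=2)
print(f"quadratic coefficient: {quad:.2f}")     # negative → inverted U
```

With real single-trial data, the same logic is applied within a mixed-model framework so that the quadratic effect is estimated within participants.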