Towards a brain-controlled hearing aid: PhD student Lorenz Fiedler shows how attended and ignored auditory streams are represented differently in the neural responses, and how the focus of auditory attention can be extracted from EEG signals recorded from electrodes placed inside the ear canal and around the ear.
Category: Auditory Neuroscience
Auditory Cognition’s own Malte Wöstmann is in press at Cerebral Cortex with his latest work on how attentional control manifests in alpha power changes: Ignoring speech can be beneficial (if comprehending speech would detract from another task), and we show here how this change in listening goals reverses the pattern of alpha-power changes with increasing speech degradation. (We will update this post as the paper becomes available online.)
Wöstmann, M., Lim, S.J., & Obleser, J. (2017). The human neural alpha response to speech is a proxy of attentional control. Cerebral Cortex. In press.
Story time: Sometime in early 2011, I sat down with an American, a fresh PhD graduate who had just joined my new lab, in a Leipzig bar (Café Cantona; if you are interested, you can find this great 24/7 bar with exquisite food also in the acknowledgments of, e.g., Obleser & Eisner, Trends Cogn Sci, 2009).
To this day, I could still point you to the table she and I sat down at, and the wall I faced (which is notable because we actually spent an unhealthy amount of time and money there over the years). Soon thereafter, we grabbed a beer mat and started scribbling waves, marked where we would place so-called targets (psychologist lingo), and talked a lot of gibberish about frequency modulation. I remember vividly that I had just read an insanely long review paper on neural oscillations by Wolfgang Klimesch (which, more in passing, cited old-school tales of Schmitt filters by the late great Francisco Varela, and pioneers with names like record producers, Dustman & Beck, 1965), while the young American opposite me turned out to be a die-hard (if adventurous) expert on auditory psychophysics.
Who would have thought that this very night would carry me towards tenure in three years’ time, and her around the globe as an esteemed young colleague.
When I check Google Scholar nowadays, I am amazed to see that more than 100 other papers have already cited what grew directly out of that beer mat one and a half years later, not counting the many more papers said postdoc, Molly Henry, has produced since.
Here is the link to how excited we were when the paper appeared in PNAS in 2012, and a link to the little movie a German science program kindly produced on all of this in 2013.
Very proud: PhD student Lorenz Fiedler goes live (pre peer review) with his work on predicting the focus of attention with single-channel forward models in in-ear EEG!
Here is the preprint of the paper, which will now undergo peer review. Thanks for checking it out!
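For readers curious what a forward (encoding) model of this kind looks like in principle: the idea is to estimate a temporal response function (TRF) that maps a talker's speech envelope, at several time lags, onto the recorded EEG; the talker whose model predicts the EEG better is taken to be the attended one. Below is a minimal sketch on simulated data, not the paper's actual pipeline; the sampling rate, lag range, ridge parameter, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 64                      # sampling rate in Hz (illustrative)
n = fs * 60                  # one minute of simulated data
lags = np.arange(0, 16)      # stimulus lags, 0-250 ms at 64 Hz

# Simulated speech envelopes for an attended and an ignored talker
env_att = np.abs(rng.standard_normal(n))
env_ign = np.abs(rng.standard_normal(n))

# Simulated single-channel EEG, driven mainly by the attended envelope
kernel = np.exp(-np.arange(len(lags)) / 4.0)   # toy temporal response function
eeg = (np.convolve(env_att, kernel)[:n]
       + 0.2 * np.convolve(env_ign, kernel)[:n]
       + 0.5 * rng.standard_normal(n))

def lagged_design(stim, lags):
    """Design matrix whose columns are time-lagged copies of the stimulus."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:len(stim) - lag]
    return X

def fit_trf(stim, eeg, lags, lam=1.0):
    """Ridge-regularized forward model: eeg ~ X @ w."""
    X = lagged_design(stim, lags)
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return w, X

def prediction_corr(stim, eeg, lags):
    """Correlation between model-predicted and actual EEG."""
    w, X = fit_trf(stim, eeg, lags)
    return np.corrcoef(X @ w, eeg)[0, 1]

r_att = prediction_corr(env_att, eeg, lags)
r_ign = prediction_corr(env_ign, eeg, lags)
print(f"attended r = {r_att:.2f}, ignored r = {r_ign:.2f}")
```

In this toy setup the attended talker's model predicts the EEG markedly better than the ignored talker's, which is the basic decision rule behind decoding the focus of attention. In practice one would of course cross-validate (fit the TRF on training data, correlate on held-out data) rather than evaluate on the fitting data as done here for brevity.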
A review article for those interested in how to use magneto-/electroencephalography (M/EEG) to study speech comprehension. We provide a historically informed overview of dependent measures in the time and frequency domains, highlight recent advances resulting from these measures, and review the notorious challenges (and solutions) speech and language researchers face when studying electrophysiological brain responses.
Now available online:
http://www.tandfonline.com/doi/full/10.1080/23273798.2016.1262051
An article by our new AC group member Michael Plöchl from his PhD project in Osnabrück has been accepted for publication in Scientific Reports. In their study, Plöchl, Gaston, Mermagen, König and Hairston demonstrate that “Oscillatory activity in auditory cortex reflects the perceptual level of audio-tactile integration”.
For those interested in auditory cortex and how a regime of predictions, prediction updates, and surprise (a version of “prediction error”) might be implemented there, I contributed a brief featurette (an “Insight”, they call it) to eLife on a recent paper by Will Sedley, Tim Griffiths, and colleagues. Check it out.
Wöstmann, Herrmann, Maess and Obleser demonstrate that the hemispheric lateralization of neural alpha oscillations measured in the magnetoencephalogram (MEG) synchronizes with the speech signal and predicts listeners’ speech comprehension.
Now available online:
http://www.pnas.org/content/early/2016/03/18/1523357113
Press release: