We are excited to share that former Obleserlab PhD student Leo Waschke, together with his new (Doug Garrett, Niels Kloosterman) and old (Jonas Obleser) labs, has published an in-depth perspective piece in Neuron, with the provocative title “Behavior needs neural variability”.
Our article is essentially a long and extensive tribute to the “second moment” of neural activity, in statistical terms: variability — be it quantified as variance, entropy, or spectral slope — is the long-neglected twin of the average, and it holds great promise for understanding neural states (how does neural activity differ from one moment to the next?) and traits (how do individuals differ from each other?).
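For readers curious how such a “second moment” can be put to work in practice, here is a purely illustrative sketch (not taken from the paper) of three common ways to quantify the variability of a single neural time series: its variance, an entropy estimate, and its spectral slope.

```python
# Illustrative only: three simple variability measures for one EEG time series.
import numpy as np
from scipy.signal import welch

def variability_measures(x, fs):
    x = np.asarray(x, dtype=float)
    variance = x.var()
    # Entropy of a coarse amplitude histogram (one of many possible entropy measures)
    counts, _ = np.histogram(x, bins=32)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    # Spectral slope: linear fit of log power vs. log frequency (1/f exponent)
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    keep = (freqs >= 1) & (freqs <= 40)
    slope = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)[0]
    return variance, entropy, slope

# Example with 10 s of simulated data sampled at 250 Hz
rng = np.random.default_rng(0)
print(variability_measures(rng.standard_normal(10 * 250), fs=250))
```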
Congratulations, Leo!
Obleserlab senior PhD student Leo Waschke, alongside co-authors Sarah Tune and Jonas Obleser, has a new paper in eLife.
The processing of sensory information from our environment is not constant but varies with changes in ongoing brain activity, or brain states. Accordingly, the acuity of perceptual decisions depends on the brain state during which sensory information is processed. Recent work in non-human animals suggests two key processes that shape brain states relevant for sensory processing and perceptual performance. On the one hand, the momentary level of neural desynchronization in sensory cortical areas has been shown to impact neural representations of sensory input and related performance. On the other hand, the current level of arousal and related noradrenergic activity has been linked to changes in sensory processing and perceptual acuity.
However, it is at present unclear whether local neural desynchronization and arousal constitute distinct brain states that entail different consequences for sensory processing and behaviour, or whether they represent two interrelated manifestations of ongoing brain activity and jointly affect behaviour. Furthermore, the exact shape of the relationship between perceptual performance and each of these brain-state markers (e.g., linear vs. quadratic) is unknown.
In order to transfer findings from animal physiology to human cognitive neuroscience and test the exact shape of unique as well as shared influences of local cortical desynchronization and global arousal on sensory processing and perceptual performance, we recorded electroencephalography and pupillometry in 25 human participants while they performed a challenging auditory discrimination task.
Importantly, auditory stimuli were selectively presented during periods of especially high or low auditory cortical desynchronization, as approximated by an information-theoretic measure of time-series complexity (weighted permutation entropy). By means of a closed-loop real-time setup, we not only presented stimuli during different desynchronization states but also made sure to sample the whole distribution of such states, a prerequisite for the accurate assessment of brain-behaviour relationships. The recorded pupillometry data additionally enabled us to draw inferences about the current level of arousal, given the established link between noradrenergic activity and pupil size.
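As an illustration of the complexity measure involved, here is a minimal sketch of weighted permutation entropy for a single EEG channel; the embedding order and delay are illustrative choices, not necessarily those used in the study.

```python
# Minimal sketch of weighted permutation entropy (WPE) for a 1-D signal.
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, order=3, delay=1):
    x = np.asarray(x, dtype=float)
    n_patterns = factorial(order)
    # Build time-delay embedded vectors of length `order`
    n_vectors = len(x) - (order - 1) * delay
    emb = np.array([x[i:i + n_vectors] for i in range(0, order * delay, delay)]).T
    # Ordinal pattern of each vector and its weight (the vector's variance)
    patterns = np.argsort(emb, axis=1)
    weights = emb.var(axis=1)
    # Accumulate weighted relative frequencies per ordinal pattern
    probs = {}
    for pat, w in zip(map(tuple, patterns), weights):
        probs[pat] = probs.get(pat, 0.0) + w
    p = np.array(list(probs.values())) / weights.sum()
    # Shannon entropy, normalized by the log of the number of possible patterns
    return -np.sum(p * np.log2(p)) / np.log2(n_patterns)

# Example: white noise yields values close to 1, a smooth sine much lower
rng = np.random.default_rng(0)
print(weighted_permutation_entropy(rng.standard_normal(2000)))
print(weighted_permutation_entropy(np.sin(np.linspace(0, 50, 2000))))
```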
Single-trial analyses of EEG activity, pupillometry, and behaviour revealed clearly dissociable influences of both brain-state markers on ongoing brain activity, early sound-related activity, and behaviour. High desynchronization states were characterized by a pronounced reduction in oscillatory power across a wide frequency range, while high arousal states coincided with a decrease in oscillatory power that was limited to high frequencies. Similarly, early sound-evoked activity was differentially impacted by auditory cortical desynchronization and pupil-linked arousal. Phase-locked responses and evoked gamma power increased with local desynchronization, with a tendency to saturate at intermediate levels. Post-stimulus low-frequency power, on the other hand, increased with pupil-linked arousal.
Most importantly, local desynchronization and pupil-linked arousal displayed different relationships with perceptual performance. While participants performed fastest and least biased following intermediate levels of auditory cortical desynchronization, intermediate levels of pupil-linked arousal were associated with the highest sensitivity. Thus, although both processes constitute behaviourally relevant brain states that affect perceptual performance following an inverted U, they impact distinct subdomains of behaviour. Taken together, our results speak to a model in which independent states of local desynchronization and global arousal jointly shape the conditions for optimal sensory processing and perceptual performance. The published manuscript including all supplemental information can be found here.
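As a rough illustration of how such an inverted-U relationship can be tested statistically (this is not the authors' analysis code), one can regress single-trial performance on a brain-state marker and its square and inspect the sign of the quadratic term.

```python
# Illustrative quadratic-regression test for an inverted-U relationship.
# `state` and `performance` are hypothetical per-trial values.
import numpy as np

rng = np.random.default_rng(1)
state = rng.uniform(-1, 1, 500)                          # e.g. z-scored entropy or pupil size
performance = 1.0 - state**2 + rng.normal(0, 0.3, 500)   # simulated inverted-U data

# Design matrix with intercept, linear, and quadratic terms
X = np.column_stack([np.ones_like(state), state, state**2])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)

print(f"linear term: {beta[1]:+.3f}, quadratic term: {beta[2]:+.3f}")
# A reliably negative quadratic term indicates peak performance at
# intermediate state values, i.e. an inverted-U relationship.
```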
Wöstmann, Alavash and Obleser demonstrate that alpha oscillations in the human brain implement distractor suppression independent of target selection.
In theory, the ability to selectively focus on relevant objects in our environment rests on the selection of targets and the suppression of distraction. As it is unclear whether target selection and distractor suppression are independent, we designed an electroencephalography (EEG) study to directly contrast these two processes.
Participants performed a pitch discrimination task on a tone sequence presented at one loudspeaker location while a distracting tone sequence was presented at another location. When the distractor was fixed in front, attention to upcoming targets on the left versus right side induced hemispheric lateralisation of alpha power, with relatively higher power ipsi- versus contralateral to the side of attention.
Critically, when the target was fixed in front, suppression of upcoming distractors reversed the pattern of alpha lateralisation, that is, alpha power increased contralateral to the distractor and decreased ipsilaterally. Since the two lateralized alpha responses were uncorrelated across participants, they can be considered largely independent cognitive mechanisms.
This was further supported by the fact that alpha lateralisation in response to distractor suppression originated in more anterior, frontal cortical regions compared with target selection (see figure).
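For illustration, a lateralisation effect of this kind is often summarized as a normalized index of ipsi- versus contralateral alpha power; the sketch below uses hypothetical single-trial values and is not the study's analysis code.

```python
# Illustrative alpha lateralisation index: normalized ipsi-minus-contra power.
import numpy as np

def alpha_lateralisation_index(power_ipsi, power_contra):
    """Positive values indicate relatively higher alpha power ipsilateral
    to the attended (or to-be-ignored) side."""
    power_ipsi = np.asarray(power_ipsi, dtype=float)
    power_contra = np.asarray(power_contra, dtype=float)
    return (power_ipsi - power_contra) / (power_ipsi + power_contra)

# Example with made-up single-trial alpha (8-13 Hz) power per hemisphere
ipsi = np.array([2.1, 1.8, 2.4])
contra = np.array([1.5, 1.6, 1.9])
print(alpha_lateralisation_index(ipsi, contra))
```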
The paper is also available as preprint here.
In this three-year project, we will use the auditory modality as a test case to investigate how the suppression of distracting information (i.e., “filtering”) is neurally implemented. While it is known that the attentional sampling of targets (a) is rhythmic, (b) can be entrained, and (c) is modulated by top-down predictions, the existence and neural implementation of these mechanisms for the suppression of distractors is at present unclear. To test this, we will use adaptations of established behavioural paradigms of distractor suppression and recordings of human electrophysiological signals in the magneto-/electroencephalogram (M/EEG).
Listening requires selective neural processing of the incoming sound mixture, which in humans is borne out by a surprisingly clean representation of attended-only speech in auditory cortex. How this neural selectivity is achieved even at negative signal-to-noise ratios (SNR) remains unclear. We show that, under such conditions, a late cortical representation (i.e., neural tracking) of the ignored acoustic signal is key to successful separation of attended and distracting talkers (i.e., neural selectivity). We recorded and modeled the electroencephalographic response of 18 participants who attended to one of two simultaneously presented stories, while the SNR between the two talkers varied dynamically between +6 and −6 dB. The neural tracking showed an increasing early-to-late attention-biased selectivity. Importantly, acoustically dominant (i.e., louder) ignored talkers were tracked neurally by late involvement of fronto-parietal regions, which contributed to enhanced neural selectivity. This neural selectivity, by way of representing the ignored talker, poses a mechanistic neural account of attention under real-life acoustic conditions.
The paper is available here.
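For readers unfamiliar with neural tracking, the sketch below illustrates the general idea of a temporal response function estimated by ridge regression from a time-lagged speech envelope to an EEG channel; the data and parameters are simulated, and the code is not the authors' pipeline.

```python
# Illustrative temporal response function (TRF) via ridge regression.
import numpy as np

def lagged_design(envelope, max_lag):
    """Stack time-lagged copies of the envelope (lags 0..max_lag samples)."""
    n = len(envelope)
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = envelope[:n - lag]
    return X

def fit_trf(envelope, eeg, max_lag=50, ridge=1.0):
    """Ridge solution w = (X'X + lambda*I)^-1 X'y for the lagged design."""
    X = lagged_design(envelope, max_lag)
    XtX = X.T @ X + ridge * np.eye(max_lag + 1)
    return np.linalg.solve(XtX, X.T @ eeg)

# Simulated example: EEG that follows the envelope with a 10-sample lag, plus noise
rng = np.random.default_rng(2)
env = rng.standard_normal(5000)
eeg = np.roll(env, 10) * 0.8 + rng.standard_normal(5000) * 0.5
trf = fit_trf(env, eeg)
print(int(np.argmax(trf)))  # peak TRF weight near lag 10, as expected
```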
What is the mechanistic relevance of neural alpha oscillations (~10 Hz) for perception? To answer this question, we analysed EEG data from a task that required participants to compare the pitch of two tones that were, unbeknownst to participants, identical. Importantly, this task entirely removed potential confounds of varying evidence in the stimulus or varying accuracy. We found that higher prestimulus alpha power correlated with lower confidence in pitch discrimination. These results demonstrate that the relationship between prestimulus alpha power and decision confidence is direct in nature and that it shows up in the auditory modality, similar to what has been shown before in vision and somatosensation. Our findings support the view that lower prestimulus alpha power enhances neural baseline excitability.
The paper is available as preprint here.
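As a toy illustration of the kind of single-trial analysis involved (with hypothetical data, not the study's code), one can estimate prestimulus alpha power per trial and rank-correlate it with confidence ratings.

```python
# Illustrative prestimulus alpha power per trial, correlated with confidence.
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

fs = 250                                        # sampling rate in Hz (illustrative)
rng = np.random.default_rng(3)
epochs = rng.standard_normal((200, fs))         # 200 trials x 1 s of prestimulus EEG
confidence = rng.integers(1, 7, 200)            # hypothetical 6-point confidence ratings

# Alpha power (8-13 Hz) per trial via Welch's method
freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
alpha = psd[:, (freqs >= 8) & (freqs <= 13)].mean(axis=1)

rho, p = spearmanr(np.log(alpha), confidence)
print(f"rho = {rho:.2f}, p = {p:.3f}")
# On real data, the paper reports higher prestimulus alpha going with lower confidence.
```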
Congratulations to Obleserlab alumna Anna Wilsch, who is – for now – leaving academia on a true high with her latest offering on how temporal expectations (“foreknowledge” about when something is to happen) shape the neural make-up of memory!
The data were recorded while the Obleserlab was still in Leipzig at the Max Planck and analysed with great input from our co-authors Molly Henry and Björn Herrmann, as well as Christoph Herrmann (Oldenburg); Anna used magnetoencephalography in an intricate but ultimately very simple sensory-memory paradigm.
While sensory memories of the physical world fade quickly, Anna here shows that this decay of short-term memory can be counteracted by temporal expectation.
Notably, spatially distributed cortical patterns of alpha (8–13 Hz) power showed opposing effects in auditory vs. visual sensory cortices. Moreover, alpha-tuned connectivity changes within supramodal attention networks reflect the allocation of neural resources as short-term memory representations fade.
– to be updated as the paper becomes available online –
AC alumna Anna Wilsch has a new paper in press in NeuroImage, with Toralf Neuling, Jonas Obleser, and Christoph Herrmann: “Transcranial alternating current stimulation with speech envelopes modulates speech comprehension”. In this proof-of-concept-like paper, we demonstrate that using the speech envelope as a “pilot signal” for electrically stimulating the human brain, while a listener tries to comprehend that speech signal buried in noise, does modulate the listener’s speech-in-noise comprehension abilities.
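To give a flavour of what a speech-envelope “pilot signal” could look like, here is a minimal sketch of envelope extraction via the Hilbert transform followed by low-pass filtering; the cutoff and the synthetic example signal are illustrative assumptions, not the published stimulation protocol.

```python
# Illustrative speech-envelope extraction: Hilbert amplitude, then low-pass filtering.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(waveform, fs, cutoff_hz=10.0):
    """Low-pass-filtered Hilbert envelope of a speech waveform."""
    env = np.abs(hilbert(waveform))                  # instantaneous amplitude
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)                       # keep the slow amplitude fluctuations

# Example with a synthetic amplitude-modulated carrier standing in for speech
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 220 * t)
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))    # ~4 Hz, a speech-like modulation rate
env = speech_envelope(carrier * modulator, fs)
print(env.shape)
```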