We welcome Sung-Joo Lim (KR) & Alex Brandmeyer (US) as new postdoctoral researchers in the group.
Sung-Joo very recently received her Ph.D. from Carnegie Mellon University, Pittsburgh, PA (US), with a dissertation entitled
Investigating the Neural Basis of Sound Category Learning within a Naturalistic Incidental Task
See her abstract:
Adults have notorious difficulty learning non-native speech categories, even with extensive training on standard tasks that provide explicit trial-by-trial feedback. Recent research on general auditory category learning demonstrates that videogame-based training, which incorporates features that model a naturalistic learning environment, leads to fast and robust learning of sound categories. Unlike standard tasks, the videogame paradigm does not require overt categorization of, or explicit attention to, sounds; listeners learn sounds incidentally as the game encourages their functional use in an environment in which actions and feedback are tightly linked to task success. These characteristics may engage reinforcement learning systems, which can potentially generate internal feedback signals from the striatum. However, the online influence of striatal signals on perceptual learning and plasticity during training has yet to be established. This dissertation focuses on the possibility that this type of training can lead to behavioral learning of non-native speech categories, and uses fMRI to investigate the neural processes postulated to be important for inducing incidental learning of sound categories within this more naturalistic training environment. Overall, our results suggest that reward-related signals from the striatum influence perceptual representations in regions associated with the processing of reliable information that can improve performance within a naturalistic learning task.
Alex very recently received his Ph.D. from Radboud University Nijmegen (NL), with a dissertation entitled
Auditory brain-computer interfaces for perceptual learning in speech and music
See his abstract:
We perceive the sounds in our environment, such as language and music, effortlessly and transparently, unaware of the complex neurophysiological mechanisms that underlie our experiences. Using electroencephalography (EEG) and techniques from the field of machine learning, it is possible to monitor our perception of the auditory world in real time and to pinpoint individual differences in perceptual abilities related to native-language background and auditory experience. Going further, these same methods can be used to provide individuals with neurofeedback during auditory perception as a means of modulating brain responses to sounds, with the eventual aim of incorporating these methods into educational settings to aid auditory perceptual learning.
We wish them both all the best.