
Neural Information Processing Group

We are concerned with the principles underlying information processing in biological systems. On the one hand, we want to understand how the brain computes; on the other hand, we want to apply the strategies employed by biological systems to machine learning applications. Our research interests cover three thematic areas.

Models of Neuronal Systems:


In collaboration with neurobiologists and clinicians, we study how the visual system processes visual information. Research topics include cortical dynamics, the representation of visual information, adaptation and plasticity, and the role of feedback. More recently, we have become interested in how perception is linked to cognitive function, and we have begun to study computational models of decision making in uncertain environments and how these processes interact with perception and memory.
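As an illustration of what a computational model of decision making under uncertainty can look like, the sketch below implements a generic Bayesian learner for a two-armed bandit that maintains Beta beliefs about reward probabilities and chooses actions with a softmax rule. It is a textbook construction with assumed parameter values, not the group's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed bandit: true reward probabilities are unknown to the agent.
true_p = np.array([0.3, 0.7])

# Beta(alpha, beta) beliefs about each arm's reward probability (uniform priors).
alpha = np.ones(2)
beta = np.ones(2)
inv_temp = 5.0  # softmax inverse temperature (assumed value)

for t in range(200):
    # Expected reward under the current beliefs.
    p_hat = alpha / (alpha + beta)
    # Softmax choice rule over expected rewards.
    logits = inv_temp * p_hat
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(2, p=probs)
    r = rng.random() < true_p[a]
    # Conjugate Beta-Bernoulli belief update.
    alpha[a] += r
    beta[a] += 1 - r

print("estimated reward probabilities:", alpha / (alpha + beta))
```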

Machine Learning and Neural Networks:


Here we investigate how machines can learn from examples in order to predict and (more recently) act. Research topics include the learning of proper representations, active and semi-supervised learning schemes, and prototype-based methods. Motivated by the model-based analysis of decision making in humans, we have also become interested in reinforcement learning schemes and how these methods can be extended to cope with multi-objective cost functions. In collaboration with colleagues from the application domains, we apply machine learning methods to problems ranging from computer vision and information retrieval to chemoinformatics.
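As a hedged sketch of how reinforcement learning can be extended to multi-objective cost functions, the example below runs tabular Q-learning on a toy environment whose reward is a vector of two objectives combined by a fixed linear scalarization. The environment, weights, and parameter values are illustrative assumptions, not the methods used by the group.

```python
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 5, 2
weights = np.array([0.7, 0.3])   # assumed preference over the two objectives
alpha, gamma, eps = 0.1, 0.95, 0.1

Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy environment (illustrative only): random next state and a reward vector,
    e.g. (task reward, negative energy cost)."""
    next_state = int(rng.integers(n_states))
    reward_vec = np.array([float(action == state % n_actions), -0.1 * action])
    return next_state, reward_vec

state = 0
for t in range(5000):
    # Epsilon-greedy action selection on the scalarized value estimate.
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward_vec = step(state, action)
    # Linear scalarization turns the multi-objective reward into a single signal.
    r = float(weights @ reward_vec)
    # Standard Q-learning update on the scalarized reward.
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)
```

Other scalarization schemes (nonlinear trade-offs, constraints, or Pareto-based approaches) lead to different update rules; the linear weighting here is simply the most compact illustration.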

Analysis of Neural Data:


Here we are interested in applying machine learning and statistical methods to the analysis of multivariate biomedical data, in particular to data that form the basis of our computational studies of neural systems. Research topics vary and currently include spike sorting and the analysis of multi-tetrode recordings, confocal microscopy and 3D reconstruction techniques, and the analysis of imaging data. Recently, we have become interested in the analysis of multimodal data, for example correlating anatomical, imaging, and genetic data.
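One common spike-sorting pipeline combines feature extraction with clustering. The sketch below, which is purely illustrative and not the group's actual pipeline, projects synthetic spike waveforms onto their first principal components and groups them with k-means.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Synthetic "spike waveforms": two units with different shapes plus noise.
t = np.linspace(0, 1, 32)
templates = np.stack([np.exp(-((t - 0.3) ** 2) / 0.01),
                      -0.8 * np.exp(-((t - 0.5) ** 2) / 0.02)])
labels_true = rng.integers(2, size=400)
waveforms = templates[labels_true] + 0.1 * rng.standard_normal((400, 32))

# Feature extraction: project each waveform onto its first principal components.
features = PCA(n_components=3).fit_transform(waveforms)

# Clustering: assign each spike to a putative unit.
labels_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

print("cluster sizes:", np.bincount(labels_pred))
```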

Selected Publications

Interaction of Instrumental and Goal-directed Learning Modulates Prediction Error Representations in the Ventral Striatum
Guo, R., Böhmer, W., Hebart, M., Chien, S., Sommer, T., Obermayer, K.*, and Gläscher, J.* Journal of Neuroscience, 36(50), 12650-12660, December 2016. DOI: https://doi.org/10.1523/JNEUROSCI.1677-16.2016
Abstract: Goal-directed and instrumental learning are both important controllers of human behavior. Learning about which stimulus events occur in the environment and the rewards associated with them allows humans to seek out the most valuable stimuli and move through the environment in a goal-directed manner. Stimulus–response associations are characteristic of instrumental learning, whereas response–outcome associations are the hallmark of goal-directed learning. Here we provide behavioral, computational, and neuroimaging results from a novel task in which stimulus–response and response–outcome associations are learned simultaneously but dominate behavior at different stages of the experiment. We found that prediction error representations in the ventral striatum depend on which type of learning dominates. Furthermore, the amygdala tracks the time-dependent weighting of stimulus–response versus response–outcome learning. Our findings suggest that the goal-directed and instrumental controllers dynamically engage the ventral striatum in representing prediction errors whenever one of them is dominating choice behavior.

Significance Statement: Converging evidence in human neuroimaging studies has shown that reward prediction errors are correlated with activity in the ventral striatum. Our results demonstrate that this region is simultaneously correlated with a stimulus prediction error. Furthermore, the learning system that is currently dominating behavioral choice dynamically engages the ventral striatum for computing its prediction errors. This demonstrates that prediction error representations are highly dynamic and influenced by the experimental context. This finding points to a general role of the ventral striatum in detecting expectancy violations and encoding error signals regardless of the specific nature of the reinforcer itself.
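The prediction errors described in the abstract are, at their core, delta-rule quantities. The sketch below is a generic Rescorla-Wagner-style illustration, not the paper's computational model: on simulated trials it computes a reward prediction error for response–outcome learning and a stimulus prediction error for stimulus–response learning, with all probabilities and the learning rate chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.2  # learning rate (assumed)

# Response-outcome (goal-directed) learning: expected reward of each of two responses.
V = np.zeros(2)
# Stimulus-response (instrumental) learning: expected probability that a given stimulus follows each response.
P = np.full(2, 0.5)

for trial in range(100):
    response = int(rng.integers(2))
    reward = float(rng.random() < (0.8 if response == 1 else 0.2))
    stimulus = int(rng.random() < (0.9 if response == 0 else 0.1))

    # Reward prediction error: actual reward minus expected reward.
    rpe = reward - V[response]
    V[response] += alpha * rpe

    # Stimulus prediction error: observed stimulus minus expected stimulus probability.
    spe = stimulus - P[response]
    P[response] += alpha * spe

print("learned response values:", V)
print("learned stimulus expectations:", P)
```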
