
Perception and Decision Making in Uncertain Environments


Here we investigate, in cooperation with our experimental and clinical partners, how human subjects behave in complex decision-making tasks. Using different experimental paradigms and subject groups, we study the impact of factual and fictive prediction errors, regret, volatility of returns, risk, and temporal discounting on the decision-making process. We construct neurocomputational models based on Markov Decision Processes and Reinforcement Learning, and we use these models to quantify hypotheses about a subject's implicit objective and about the mechanisms underlying decision making and learning. The models also let us quantify behavior and understand how humans learn in these tasks, especially when contingencies and task demands change. Behavioral studies are combined with functional imaging and genetics, which allows us to relate model quantities directly to correlates of neural activity, to individual variations in neurotransmitter systems, and to the genetic disposition of the subjects. We are also interested in how these "cognitive" quantities interact with perception, i.e. how quantities such as stimulus expectancies and average rewards are implicitly estimated during sequences of perceptual tasks, and how these quantities modulate visual processing. Reinforcement and reward-based learning is also investigated in a machine learning context; for details see the "Approximate Reinforcement Learning" page under "Research".
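To illustrate the class of models described above (and not the group's actual code), the following is a minimal sketch of a delta-rule reinforcement learner on a hypothetical two-armed bandit task: the chosen option is updated with the factual prediction error, and action selection is a softmax over learned values. All parameter values (`alpha`, `beta`, the reward probabilities) are illustrative assumptions.

```python
import math
import random

def softmax_choice(q, beta, rng):
    """Sample an action with softmax probabilities over the Q-values."""
    weights = [math.exp(beta * v) for v in q]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for action, w in enumerate(weights):
        acc += w
        if r <= acc:
            return action
    return len(q) - 1

def simulate(trials=1000, alpha=0.2, beta=3.0, probs=(0.8, 0.2), seed=0):
    """Delta-rule learner on a two-armed bandit with reward
    probabilities `probs` (hypothetical task parameters).
    Only the chosen option is updated, using the factual
    prediction error (reward minus current value estimate)."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    choices = []
    for _ in range(trials):
        a = softmax_choice(q, beta, rng)
        reward = 1.0 if rng.random() < probs[a] else 0.0
        q[a] += alpha * (reward - q[a])  # factual prediction error
        choices.append(a)
    return q, choices
```

Fitting the free parameters (learning rate `alpha`, inverse temperature `beta`) to a subject's trial-by-trial choices is one way such models quantify individual decision-making behavior; extensions with fictive prediction errors additionally update the unchosen option.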

Acknowledgements: Research is funded by BMBF (via the Bernstein Center and Bernstein Focus funding schemes) and the Technische Universität Berlin.

Selected Publications

Houillon, A., Lorenz, R. C., Boehmer, W., Rapp, M. A., Heinz, A., Gallinat, J. and Obermayer, K. (2013). The effect of novelty on reinforcement learning. Progress in Brain Research, 202, 415–439.

Shen, Y., Stannat, W. and Obermayer, K. (2013). Risk-sensitive Markov Control Processes. SIAM Journal on Control and Optimization, 51, 3652–3672.
