
Wendelin Böhmer, MSc


Room: MAR 5056
Tel: 314-73441
Fax: 314-73121

Secretariat MAR 5-6
Marchstr. 23
D-10587 Berlin

Research Interests: Structural leverage in approximate reinforcement learning, machine learning, autonomous learning, artificial intelligence.

Curriculum Vitae

NI, TU-Berlin: Research/teaching assistant associated with the DFG project Linking metric and symbolic levels in autonomous reinforcement learning (SPP 1527 autonomous learning).
NI, TU-Berlin: Researcher in the DFG project Value representation in large factored state spaces (SPP 1527 autonomous learning).
NI, TU-Berlin: Scholarship of the H-C3 IGP, topic: State representation in approximate reinforcement learning.
NI, TU-Berlin: Student assistant in the DFG project NeuRoBot.
SWT, TU-Berlin: Tutor for basic and advanced computer science courses.
Fak IV, TU-Berlin: Computer science diploma. Thesis: Robot Navigation Using Reinforcement Learning and Slow Feature Analysis.


Approximate Reinforcement Learning

Böhmer, W., Guo, R., and Obermayer, K. (2016). Non-deterministic Policy Improvement Stabilizes Approximate Reinforcement Learning. 13th European Workshop on Reinforcement Learning.

Böhmer, W., and Obermayer, K. (2015). Regression with Linear Factored Functions. Proceedings of ECML/PKDD 2015, Machine Learning and Knowledge Discovery in Databases, Volume 9284 of Lecture Notes in Computer Science, pp. 119–134.

Böhmer, W., Springenberg, J.T., Boedecker, J., Riedmiller, M., and Obermayer, K. (2015). Autonomous Learning of State Representations for Control: an emerging field aims to autonomously learn state representations for reinforcement learning agents from their real-world sensor observations. KI - Künstliche Intelligenz 29(4): 353-362.

Böhmer, W., Grünewälder, S., Shen, Y., Musial, M., and Obermayer, K. (2013). Construction of Approximation Spaces for Reinforcement Learning. Journal of Machine Learning Research, 14:2067–2118.

Böhmer, W., and Obermayer, K. (2013). Towards Structural Generalization: Factored Approximate Planning. ICRA Workshop on Autonomous Learning.

Slow Feature Analysis

Böhmer, W., Grünewälder, S., Nickisch, H., and Obermayer, K. (2012). Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis. Machine Learning, 89:67–86.

Böhmer, W., Grünewälder, S., Nickisch, H., and Obermayer, K. (2011). Regularized Sparse Kernel Slow Feature Analysis. Machine Learning and Knowledge Discovery in Databases, Part I. Springer-Verlag Berlin Heidelberg, pp. 235–248.

Modelling Cognitive Decision Making

Guo, R., Böhmer, W., Hebart, M., Chien, S., Sommer, T., Obermayer, K., and Gläscher, J. (2016). Interaction of Instrumental and Goal-directed Learning Modulates Prediction Error Representations in the Ventral Striatum. Journal of Neuroscience, 36:12650–12660.

Tobia, M., Guo, R., Schwarze, U., Böhmer, W., Gläscher, J., Finckh, B., Marschner, A., Büchel, C., Obermayer, K., and Sommer, T. (2014). Neural systems for choice and valuation with counterfactual learning signals. NeuroImage, 89:57–69.

Houillon, A., Lorenz, R., Böhmer, W., Rapp, M., Heinz, A., Gallinat, J., and Obermayer, K. (2013). The effect of novelty on reinforcement learning. Progress in Brain Research, 202:415–439.
