Courses in the winter semester 2021/22
Please note that, due to the IT attack on TU Berlin, it is currently not possible to edit this website, so unfortunately not all information is up to date.
You can therefore find our course offerings for the winter semester 2021/22 here.
Reduced course portfolio due to COVID-19 pandemic
Due to the ongoing COVID-19 pandemic, the Neural Information Processing group might not offer all courses in the summer semester 2021 (SoSe 2021). Further information on how the courses will be run can be found on ISIS.
The following courses will be offered:
- Praktisches Programmieren und Rechneraufbau
- Machine Intelligence II
- Einführung in die Informatik: Vertiefung
The following courses might take place (not decided yet):
- Advanced topics in reinforcement learning
Neural Information Processing Group
We are concerned with the principles underlying information processing in biological systems. On the one hand we want to understand how the brain computes, on the other hand we want to utilize the strategies employed by biological systems for machine learning applications. Our research interests cover three thematic areas.
Models of Neuronal Systems:
In collaboration with neurobiologists and clinicians we study how the visual system processes visual information. Research topics include: cortical dynamics, the representation of visual information, adaptation and plasticity, and the role of feedback. More recently we became interested in how perception is linked to cognitive function, and we began to study computational models of decision making in uncertain environments, and how those processes interact with perception and memory.
Machine Learning and Neural Networks:
Here we investigate how machines can learn from examples in order to predict and (more recently) act. Research topics include the learning of proper representations, active and semi-supervised learning schemes, and prototype-based methods. Motivated by the model-based analysis of decision making in humans, we also became interested in reinforcement learning schemes and how these methods can be extended to cope with multi-objective cost functions. In collaboration with colleagues from the application domains, machine learning methods are applied to problems ranging from computer vision and information retrieval to chemoinformatics.
Analysis of Neural Data:
Here we are interested in applying machine learning and statistical methods to the analysis of multivariate biomedical data, in particular to data which form the basis of our computational studies of neural systems. Research topics vary and currently include spike-sorting and the analysis of multi-tetrode recordings, confocal microscopy and 3D-reconstruction techniques, and the analysis of imaging data. Recently we became interested in the analysis of multimodal data, for example, correlating anatomical, imaging, and genetic data.
Author: Böhmer, W., Grünewälder, S., Shen, Y., Musial, M., and Obermayer, K.
Journal: Journal of Machine Learning Research
Abstract: Linear reinforcement learning (RL) algorithms like least-squares temporal difference learning (LSTD) require basis functions that span approximation spaces of potential value functions. This article investigates methods to construct these bases from samples. We hypothesize that an ideal approximation space should encode diffusion distances and that slow feature analysis (SFA) constructs such spaces. To validate our hypothesis we provide theoretical statements about the LSTD value approximation error and induced metric of approximation spaces constructed by SFA and the state-of-the-art methods Krylov bases and proto-value functions (PVF). In particular, we prove that SFA minimizes the average (over all tasks in the same environment) bound on the above approximation error. Compared to other methods, SFA is very sensitive to sampling and can sometimes fail to encode the whole state space. We derive a novel importance sampling modification to compensate for this effect. Finally, the LSTD and least squares policy iteration (LSPI) performance of approximation spaces constructed by Krylov bases, PVF, SFA and PCA is compared in benchmark tasks and a visual robot navigation experiment (both in a realistic simulation and with a robot). The results support our hypothesis and suggest that (i) SFA provides subspace-invariant features for MDPs with self-adjoint transition operators, which allows strong guarantees on the approximation error, (ii) the modified SFA algorithm is best suited for LSPI in both discrete and continuous state spaces and (iii) approximation spaces encoding diffusion distances facilitate LSPI performance.
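The abstract above centers on LSTD, which fits a linear value function by solving a small linear system built from sampled transitions. As a rough illustration of that core idea only (not of the paper's SFA-based basis construction), here is a minimal LSTD sketch; the chain MDP, the one-hot feature map, and all function names are illustrative assumptions:

```python
import numpy as np

def lstd(transitions, phi, gamma):
    """Minimal least-squares temporal difference (LSTD) solver.

    Solves A w = b with A = sum phi(s) (phi(s) - gamma*phi(s'))^T
    and b = sum r * phi(s), so that V(s) is approximated by phi(s) @ w.
    """
    k = len(phi(transitions[0][0]))
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

# Illustrative 3-state deterministic cycle 0 -> 1 -> 2 -> 0,
# with reward 1 only on the transition back to state 0.
samples = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 0)]
one_hot = lambda s: np.eye(3)[s]   # tabular (one-hot) features
w = lstd(samples, one_hot, gamma=0.9)
# With tabular features the solution equals the exact Bellman values,
# V = [0.81, 0.9, 1.0] / (1 - 0.9**3).
```

The quality of the basis `phi` is exactly what the paper studies: with tabular features the solution is exact, while a poorly chosen low-dimensional basis distorts the value estimate.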
Prof. Dr. rer. nat. Klaus Obermayer
Room MAR 5043
e-mail query 
registration via email
During the restricted access to TU buildings in reaction to the Covid-19 pandemic, it is necessary to register by email for the office hour of Prof. Obermayer.
Please send an email a few days in advance explaining your concern. If it cannot be resolved by email, you will receive an email at the time of the office hour (Wed, 12-1 pm) with a link that allows you to participate in a video conference with Prof. Obermayer.
All requests are handled first-in-first-out. Please remain available for the whole duration of the office hour.
Room MAR 5042
Phone: +49 30 314 73442
Fax: +49 30 314 73121
e-mail query 
Wed 9am - 11am
Departmental research labs
- Cognitive Systems 
- Data Analytics & Cloud 
- Future Internet & Media Technology 
- Cyber-Physical Systems 
Collaborative projects in research and education
- Bernstein Center for Computational Neuroscience 
- Research Training Group "Sensory Computation in Neural Systems" 
- Graduate School Mind and Brain 
- International Master-Program in Computational Neuroscience 
- Einstein Center Neuroscience 
- Collaborative Research Center "Control of Self-Organizing Nonlinear Systems" 
- SysMedAlcoholism: Alcohol Addiction: A Systems-Oriented Approach 
- Science of Intelligence (DFG Research Cluster) 
- Collaborative Research Center "Mechanisms and Disturbances in Memory Consolidation" 
- SMARTSTART - training program in computational neuroscience