TU Berlin

Neural Information Processing Project

During this course, participants work on a scientific project of limited scope under the supervision of an experienced researcher. Project topics vary every semester but are always related to the current research of the Neural Information Processing Group. In the past, topics were selected from research fields including the modeling of neural systems, machine learning, artificial neural networks and their applications, and the analysis of neural data. During the course, students will read original publications, learn how to prepare and present a brief project proposal, learn how to scientifically address a complex problem, learn how to discuss and defend their findings during a scientific poster session, and learn how to compile their results in the form of a typical conference paper. This course also includes a seminar part.

Details will be presented at our first meeting on Monday, the 24th of April 2017, at 12:15 in MAR 5.060. For further questions, please contact .

Project participation

All projects have been assigned in our last meeting. If you were not present and wish to know which project you were assigned to, please send an email to .

Organization slides (PDF, 523,4 KB)

Uncertainty Quantification in Neural Networks

Supervisor:

Deep neural networks (DNNs) can tackle a wide range of problems, from brain tumour detection to real-time speech translation and bankruptcy prediction. However, in their basic form, DNNs provide only deterministic outputs without estimates of uncertainty. Measures of confidence are valuable in many circumstances, for instance to determine how certain we are that a patient has a tumour. In this project, we will evaluate different methods for uncertainty quantification in DNNs.
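One widely used method of this kind is Monte Carlo dropout: keep dropout active at test time and average over stochastic forward passes, so the spread of the samples serves as an uncertainty estimate. A minimal NumPy sketch (the toy network, its untrained weights, and the dropout rate are illustrative assumptions, not part of the project description):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network with fixed (untrained) weights.
W1 = rng.normal(size=(1, 50))
W2 = rng.normal(size=(50, 1))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > drop_rate   # random dropout mask
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=200):
    """Average T stochastic passes: mean = prediction, std = uncertainty."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)
```

The same recipe carries over to a real trained network: only the dropout layers need to stay active at prediction time.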

Slides (PDF, 3,4 MB)

Student Paper (PDF, 1,2 MB)

Implementation and optimization in machine learning

Supervisor:

Machine learning techniques like neural networks and kernel methods suffer from a particular "curse of dimensionality": integrals over learned function values scale exponentially with the number of input dimensions. These integrals are, however, essential in applications like reinforcement learning and Bayesian inference. We have developed a promising regression algorithm that does not suffer from this curse, but our Matlab implementation is slow compared with other state-of-the-art approaches. The student will implement the algorithm in C++ and compare its runtime with that of standard approaches like Gaussian processes in Matlab.
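The scaling problem is easy to see: integrating a learned function on a tensor-product grid with n points per axis requires n^d function evaluations, so the cost explodes with the input dimension d. The sketch below illustrates this (the integrand and grid resolution are illustrative choices, not the group's algorithm):

```python
import numpy as np

def grid_cost(n_per_axis, d):
    """Number of function evaluations for a tensor-product grid."""
    return n_per_axis ** d

# A modest 10-point-per-axis grid already needs 10**10 evaluations in 10-D.
costs = [grid_cost(10, d) for d in (1, 2, 5, 10)]

# By contrast, a plain Monte Carlo estimate of E[f(x)] under a known input
# density uses a fixed sample budget regardless of d (at the price of
# slow, dimension-independent statistical convergence).
rng = np.random.default_rng(0)
d, n_samples = 10, 10_000
x = rng.standard_normal((n_samples, d))
f = lambda x: np.exp(-np.sum(x**2, axis=1) / 4)   # illustrative "learned" function
mc_estimate = f(x).mean()
```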

The project provides great insight into machine learning techniques and code optimization.

Slides (PDF, 1,8 MB)

Student Poster (PDF, 1,9 MB)

Student Paper (PDF, 2,1 MB)

Deep reinforcement learning

Supervisor:

Recent years have seen tremendous advances in reinforcement learning (RL) on simulated games due to the application of deep neural networks. The goal of this project is to first solve a classical task in an abstract state space (from the OpenAI Gym) and then to extend the approach to images as inputs. The student(s) will implement a basic deep Q-network (DQN) algorithm (Mnih et al., 2013, 2015), using the Python interface of TensorFlow or a comparable framework. Programming skills in Python and background knowledge in neural networks and/or reinforcement learning (e.g. Machine Intelligence 1) are strongly recommended.
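At the core of DQN is the classical Q-learning update Q(s,a) ← Q(s,a) + α(r + γ max_a' Q(s',a') − Q(s,a)); DQN replaces the table with a deep network trained on replayed transitions. As a warm-up, the update can be sketched in tabular form on a hypothetical 5-state chain environment (reward only for reaching the rightmost state; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, ACTIONS = 5, (-1, +1)       # chain environment: move left or right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1     # learning rate, discount, exploration

Q = np.zeros((N_STATES, len(ACTIONS)))

for _ in range(2000):                 # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = rng.integers(2) if rng.random() < EPS else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update (in DQN this becomes a gradient step on a network)
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])
        s = s_next

policy = np.argmax(Q, axis=1)         # learned greedy policy
```

In the project, the table Q becomes a network Q(s, a; θ), the update becomes a loss on minibatches drawn from a replay buffer, and the environment comes from the OpenAI Gym.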

Slides (PDF, 449,1 KB)

Evaluation of deformation algorithms for MRI registration

Supervisor:

In this project you will become familiar with the state-of-the-art algorithms used for MRI registration, which is an important part of neuroimage analysis. A quantitative comparison of the currently available methods will be carried out, and a guideline for applying these methods will be derived from the results.

Slides (PDF, 218,6 KB)

Convolutional Autoencoder

Supervisor:

We want to train a neural network to classify images. Before we do that, an autoencoder is trained so that the network retains information about its input. The weights obtained from training the autoencoder are used to initialize a neural network for image classification. It has been shown that this pre-training allows the network to reach higher generalization performance than a random weight initialization. In this project, a convolutional architecture, which is well suited for visual data, will be used for the autoencoder to obtain said improved weight initialization. The project requires strong Python skills and assumes good knowledge of machine learning algorithms such as gradient descent and back-propagation.
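The pre-training idea can be sketched in a few lines of NumPy, here with a dense (not yet convolutional) single-layer linear autoencoder trained by plain gradient descent; the data, sizes, and learning rate are illustrative assumptions. The trained encoder weights become the initial weights of the classifier's first layer:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))           # toy "images", 30 pixels each

# One-hidden-layer linear autoencoder with tied weights:
# x -> h = x W (encoder) -> x_hat = h W.T (decoder)
W = 0.01 * rng.standard_normal((30, 10))

def recon_loss(W):
    H = X @ W
    return np.mean((X - H @ W.T) ** 2)

loss_before = recon_loss(W)
lr = 0.01
for _ in range(500):                         # gradient descent on the loss
    H = X @ W
    E = H @ W.T - X                          # reconstruction error
    grad = (X.T @ E @ W + E.T @ X @ W) * (2 / X.shape[0])
    W -= lr * grad
loss_after = recon_loss(W)

# Transfer: the trained encoder initializes the classifier's first layer,
# instead of a fresh random draw.
W_classifier_layer1 = W.copy()
```

In the project, the dense encoder is replaced by convolutional layers, and the decoder mirrors them; the weight-transfer step stays the same.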

Slides (PDF, 904,5 KB)

Student Paper (PDF, 3,9 MB)

Forget Autoencoders

Supervisor:

An autoencoder is a neural network that is trained to reproduce its input (e.g. an image) after applying multiple successive transformations to a higher- (or lower-) dimensional space. Autoencoders have been used to pre-train neural networks before training them on machine learning tasks such as image classification. This project investigates whether the representation (i.e. the weights) needs to be kept close to the autoencoder representation. In other words, when we move to the classification task, how much can we "forget" of what we learned before? The project requires strong Python skills and assumes good knowledge of machine learning algorithms such as gradient descent and back-propagation.
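One concrete way to make "forgetting" measurable is to fine-tune with an explicit penalty λ‖W − W_ae‖² that pulls the weights back toward the pretrained solution, and then vary λ. The sketch below uses a toy logistic-regression "network"; the data, the pretend autoencoder weights W_ae, and λ are illustrative assumptions, not the project's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = (X[:, 0] > 0).astype(float)              # toy binary labels
W_ae = rng.standard_normal(5)                # pretend pretrained (autoencoder) weights

def fine_tune(lam, steps=300, lr=0.1):
    """Logistic regression with an L2 pull toward the pretrained weights."""
    W = W_ae.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ W))         # sigmoid predictions
        grad = X.T @ (p - y) / len(y) + 2 * lam * (W - W_ae)
        W -= lr * grad
    return W

W_free = fine_tune(lam=0.0)                  # unconstrained: may drift far from W_ae
W_tied = fine_tune(lam=1.0)                  # stays close: little "forgetting"
```

Comparing the distance ‖W − W_ae‖ against task accuracy across λ then quantifies how much of the pretrained representation can be given up.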

Slides (PDF, 275,4 KB)

Student Paper (PDF, 1,2 MB)

Signal decoding from EEG signals

Supervisor:

In this project, you will be provided with a dataset of electroencephalography (EEG) signals recorded during a task. You will extract and select features from these signals and use them to classify the different trials.
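A typical pipeline of this kind extracts band-power features per trial (e.g. alpha-band power via an FFT) and feeds them to a classifier. A sketch on synthetic trials, where the sampling rate, frequency band, and the simple threshold classifier are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
FS, N = 250, 500                              # 250 Hz sampling, 2-second trials

def make_trial(alpha_amp):
    """Synthetic EEG trial: white noise plus a 10 Hz (alpha) oscillation."""
    t = np.arange(N) / FS
    return rng.standard_normal(N) + alpha_amp * np.sin(2 * np.pi * 10 * t)

def bandpower(x, lo=8, hi=12):
    """Mean spectral power in [lo, hi] Hz, computed from the FFT."""
    freqs = np.fft.rfftfreq(len(x), 1 / FS)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

# Two trial types: strong vs. weak alpha power (40 trials each).
X = np.array([bandpower(make_trial(a)) for a in [2.0] * 40 + [0.2] * 40])
y = np.array([1] * 40 + [0] * 40)

# Classify by thresholding the single band-power feature at the midpoint
# between the two class means.
thr = (X[y == 1].mean() + X[y == 0].mean()) / 2
acc = ((X > thr) == y).mean()
```

On real EEG data, the feature extraction and selection stage (which channels, bands, and time windows) is exactly where most of the project's work lies.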

Slides (PDF, 1,5 MB)

Student Paper (PDF, 1,3 MB)

Dimensionality reduction of brain microcircuits

Supervisor:

In this project, we study the behavior of small networks recorded in the monkey's visual areas V1 and V4. The dimensionality of the high-dimensional data is reduced with Principal Component Analysis (PCA), and the aim is to determine whether different experimental conditions are linearly separable in the PCA space. We search for separability of two types of stimuli as well as of two types of behavioral choices. Required background: the MI course or another machine learning course.
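The analysis can be sketched as follows: reduce the trial vectors with PCA (here computed via an SVD of the centered data) and check whether the condition labels separate linearly in the first few components. The data below are synthetic placeholders for the recorded V1/V4 activity, and the nearest-centroid separability check is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trials": 60 trials x 40 neurons, two conditions separated
# along a random direction in neural space.
direction = rng.standard_normal(40)
X = rng.standard_normal((60, 40))
y = np.array([0] * 30 + [1] * 30)
X[y == 1] += 4 * direction / np.linalg.norm(direction)

# PCA via SVD of the centered data; keep the first 3 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                             # trials projected into PCA space

# Linear separability check: nearest-centroid classification in PCA space.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

With real recordings, the same projection and check would be run separately for the stimulus contrast and for the behavioral-choice contrast.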

Slides (PDF, 7,0 MB)

Penalized Regression Model Feature Selection

Supervisor:

In this project, the students will apply sparse regression algorithms (specifically, group-sparse regression) for feature selection on medical data such as gene expression profiles.
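What makes group-sparse regression (the group lasso) a feature selector is that whole groups of coefficients are driven to zero together, e.g. all probes belonging to one gene. A sketch of proximal gradient descent with the group soft-thresholding operator on synthetic data (the group sizes, λ, and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 groups of 5 features; only group 0 actually influences y.
groups = [slice(0, 5), slice(5, 10), slice(10, 15)]
X = rng.standard_normal((100, 15))
w_true = np.zeros(15)
w_true[:5] = np.array([1.0, -1.5, 2.0, 0.5, -1.0])
y = X @ w_true + 0.1 * rng.standard_normal(100)

lam, lr = 0.5, 0.01
w = np.zeros(15)
for _ in range(1000):
    w = w - lr * X.T @ (X @ w - y) / len(y)   # gradient step on the squared loss
    for g in groups:                          # proximal step: group soft-thresholding
        norm = np.linalg.norm(w[g])
        w[g] = max(0.0, 1 - lr * lam / norm) * w[g] if norm > 0 else w[g]

# Group norms reveal which feature groups were selected.
group_norms = [np.linalg.norm(w[g]) for g in groups]
```

Unlike the plain lasso, which zeroes individual coefficients, the prox here shrinks each group's norm as a whole, so irrelevant groups vanish jointly.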

Slides (PDF, 583,9 KB)

Student Paper (PDF, 590,4 KB)
