TU Berlin

Neural Information Processing Project


During this course, participants work on a scientific project of limited scope under the supervision of an experienced researcher. Project topics vary every semester but are always related to the current research projects of the Neural Information Processing Group. In the past, topics were selected from research fields including the modelling of neural systems, machine learning, artificial neural networks and their applications, and the analysis of neural data. During the course, students will read original publications, learn how to prepare and present a brief project proposal, learn how to scientifically address a complex problem, learn how to discuss and defend their findings during a scientific poster session, and learn how to compile their results in the form of a typical conference paper. This course may also include a seminar part.

Details on the timeline and registration for the summer term 2022 will be made available on ISIS (at the start of the semester). For further questions, please contact .

Below are descriptions of projects from previous semesters. A list of project titles and supervisors can be found below. Project details for the upcoming semester will be announced over the coming weeks through ISIS.

 

 

Projects

  1. Inspecting representation similarity measures by visualization - supervisor: Thomas Goerttler
  2. Illumination of the relationship of fairness criteria in machine learning - supervisor: Thomas Goerttler
  3. Transferability - Visualization and analysis of encodings - supervisor: Thomas Goerttler
  4. Blog Track - a different way to discuss research - supervisor: Thomas Goerttler
  5. Homogeneous and heterogeneous models of whole-brain dynamics - supervisor: Cristiana Dimulescu/Christoph Metzner
  6. Testing Psychophysical Hypotheses by Simulating Realistic Human Scanpaths in Dynamic Real-World Scenes - supervisor: Nicolas Roth
  7. Generating Synthetic Image Data for Causal Feature Learning - supervisor: Heiner Spieß
  8. Can we beat the gradient descent algorithm in the computation of optimal control signals? - supervisor: Lena Salfenmoser
  9. Safe Exploration in Deep Reinforcement Learning - supervisor: Rong Guo
  10. Risk-Sensitive Reinforcement Learning in AI Safety Gridworlds - supervisor: Rong Guo
  11. Meta-Q-Learning in AI Safety Gridworlds - supervisor: Rong Guo
  12. Safe Reinforcement Learning via Curriculum Induction - supervisor: Rong Guo
  13. Safe Reinforcement Learning via Ensemble Learning - supervisor: Rong Guo
  14. Forward EEG modelling implementation in neurolib - supervisor: Nikola Jajcay
  15. Exploring dimensionality reduction to latent space of fMRI data using variational autoencoders - supervisor: Nikola Jajcay

Projects from recent years

Understanding Policies of Agents in Reinforcement Learning - supervisor: Thomas Goerttler


Deep reinforcement learning algorithms have been shown to work well in games with imperfect information. Nevertheless, it is not easy to understand which strategy a trained agent follows.

This project aims to build a tool that helps to understand these algorithms better by visualizing the development of the policy during training.

Preferable Requirements: Foundations of Machine Learning, python, web-development

Number of students: 1-3

Visual Comparison of Model-Agnostic Meta-Learning - supervisor: Thomas Goerttler


In recent years, meta-learning has attracted interest from many different research fields in machine learning. Model-agnostic meta-learning (https://arxiv.org/abs/1703.03400) is an especially fascinating approach, and many extensions have been proposed to improve its training.


This project should compare these different approaches. The goal is to make the comparison as visually understandable as possible. A resulting website could be modeled on https://distill.pub/.

Preferable Requirements: Foundations of Machine Learning, python, web-development

Number of students: 1-3

Impact of inter-individual structural connectivity variability on network dynamics - supervisor: Cristiana Dimulescu/Christoph Metzner

One approach towards investigating neural dynamics is to use a whole-brain model capable of reproducing neural states, calibrated based on empirical structural and functional connectivity data (Cakan et al., 2021). While these models are constructed based on averaged connectomes, it is unclear how inter-individual variability is reflected in terms of model dynamics. Furthermore, while healthy individuals might be relatively homogeneous in terms of brain connectivity, various pathologies, such as schizophrenia, fundamentally alter both the structure and the dynamics of the brain. Therefore, in this project, we aim to explore the impact of such inter-individual differences on network dynamics in both healthy and schizophrenic individuals. To this end, a neural mass model will be fitted to individual connectomes from both participant samples and potential parameter differences between the two will be explored.
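As a toy illustration of the modelling approach, a rate-based neural mass network on a given connectome can be simulated with a few lines of NumPy. This is a deliberately simplified stand-in for the adaptive model of Cakan et al. (2021); the dynamics, coupling strengths, and two-node "connectomes" below are illustrative only:

```python
import numpy as np

def simulate(W, steps=2000, dt=0.1, tau=1.0):
    """Euler integration of a toy rate-based neural mass network
    coupled through a connectome matrix W."""
    r = np.full(W.shape[0], 0.1)          # initial rates
    trace = np.empty((steps, W.shape[0]))
    for t in range(steps):
        r = r + dt * (-r + np.tanh(W @ r)) / tau
        trace[t] = r
    return trace

# two toy "individual connectomes" differing only in coupling strength
W_weak = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
W_strong = 1.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
rates_weak = simulate(W_weak)      # activity decays to the silent state
rates_strong = simulate(W_strong)  # activity settles at a high-rate state
```

Even this minimal model shows how a structural difference between "individuals" (here, coupling strength) changes the dynamical regime, which is the kind of effect the project explores on real connectomes.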

Good Python programming skills are required. 

References

  1. Cakan, C., Dimulescu, C., Khakimova, L., Obst, D., Flöel, A., & Obermayer, K. (2020). A deep sleep model of the human brain: how slow waves emerge due to adaptation and are guided by the connectome. arXiv preprint arXiv:2011.14731.

Number of students: 2

The influence of parcellation scheme choice on network dynamics - supervisor: Cristiana Dimulescu/Christoph Metzner

Whole-brain models allow for the simulation of large-scale neural dynamics. In this approach, the brain has to be subdivided into a certain number of regions according to a predefined parcellation scheme. Nevertheless, no standard has been established for this scheme, and it is unclear what impact the choice has on simulated neural dynamics; more specifically, whether the differences are quantitative or qualitative. In this project, we aim to investigate how a parcellation scheme at three different resolutions (with 100, 200, and 1000 regions (Schaefer et al., 2017)) affects the dynamics of a neural mass model. To this end, we will use structural connectomes and functional connectivity matrices from older adults. The analysis will focus on systematically exploring state-space differences at the three parcellation resolutions.

Good Python programming skills are required. 

References

  1. Cakan, C., Dimulescu, C., Khakimova, L., Obst, D., Flöel, A., & Obermayer, K. (2020). A deep sleep model of the human brain: how slow waves emerge due to adaptation and are guided by the connectome. arXiv preprint arXiv:2011.14731.

  2. Schaefer, A., Kong, R., Gordon, E. M., Laumann, T. O., Zuo, X.-N., Holmes, A. J., et al. (2017). Local-Global Parcellation of the Human Cerebral Cortex from Intrinsic Functional Connectivity MRI. Cerebral Cortex, 1–20.

Number of students: 2

A Hidden Markov Model for state-switching in functional neuroimaging data - supervisor: Cristiana Dimulescu/Christoph Metzner

Complex human behaviour emerges from dynamic patterns of neural activity that transiently synchronize between distributed brain networks. This project aims to model the dynamics of neural activity in individuals with schizophrenia and to investigate whether the attributes of these dynamics are associated with the disorder's behavioural and cognitive deficits. A Hidden Markov Model (HMM) will be fitted to experimental resting-state MEG and/or fMRI data from patients with schizophrenia and healthy control participants. The proportion of time spent in each state and the mean length of visits to each state will be compared between groups, and the associations between these state descriptors and symptom severity will be analysed.
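The two state descriptors compared between groups, fractional occupancy and mean dwell time, can be computed directly from a decoded state sequence. A minimal sketch (the function name is illustrative; in practice the sequence would come from a fitted HMM):

```python
import numpy as np

def state_metrics(states, n_states):
    """Fractional occupancy and mean visit (dwell) length per state,
    computed from a decoded state sequence."""
    states = np.asarray(states)
    occupancy = np.array([(states == k).mean() for k in range(n_states)])
    # split the sequence into runs of identical states
    runs = np.split(states, np.flatnonzero(np.diff(states)) + 1)
    dwell = np.zeros(n_states)
    for k in range(n_states):
        lengths = [len(r) for r in runs if r[0] == k]
        dwell[k] = np.mean(lengths) if lengths else 0.0
    return occupancy, dwell

occ, dwell = state_metrics([0, 0, 1, 1, 1, 0, 2, 2], n_states=3)
# occ   → [0.375, 0.375, 0.25]  (fraction of timepoints per state)
# dwell → [1.5, 3.0, 2.0]       (mean run length per state)
```

The group comparison then reduces to comparing these per-subject vectors between patients and controls.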

Good Python programming skills are required.

Literature: Kottaram, A., Johnston, L. A., Cocchi, L., Ganella, E. P., Everall, I., Pantelis, C., ... & Zalesky, A. (2019). Brain network dynamics in schizophrenia: Reduced dynamism of the default mode network. Human Brain Mapping, 40(7), 2212-2228.

Number of students: 2

Exploring neural representations in deep neural networks trained with local error signals - supervisor: Heiner Spieß

Current Deep Neural Networks are trained with the Backpropagation algorithm, which updates the parameters simultaneously and globally. As artificial neural networks have their intellectual roots in biological networks, many people raise concerns about Backpropagation being unnatural. Much research has hypothesized that it is more plausible that biological neural networks implement some sort of local approximation to Backpropagation. Various learning algorithms that are biologically more plausible have been introduced, e.g. (Direct) Feedback Alignment or Target Propagation (see [1] and references therein). Even layer-wise pre-training, as in stacked Autoencoders, which was once almost essential for training relatively Deep Neural Networks successfully, shows some resemblance to these biologically motivated local learning algorithms.


In this project, these local learning algorithms should be explored by comparing the learned networks to traditionally trained ones, i.e. via Backpropagation.

This comparison might utilize techniques like:

- Representational Similarity Analysis (or similar methods) ([2])

- Intrinsic Dimensionality of the learned representations (i.e. hidden activations of a set of samples) ([3, 4])

- Dimensionality Reduction / Embedding Algorithms

- Transferability to new tasks
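As an example of the first technique, a minimal Representational Similarity Analysis reduces each network to a representational dissimilarity matrix (RDM) over stimuli and then correlates the RDMs. A bare-bones sketch (real analyses typically use Spearman correlation and far larger stimulus sets):

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - correlation between
    the activation patterns (rows) evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(activations)

def rsa_score(act_a, act_b):
    """Correlate the upper triangles of two networks' RDMs
    (computed on the same stimulus set)."""
    iu = np.triu_indices(act_a.shape[0], k=1)
    return np.corrcoef(rdm(act_a)[iu], rdm(act_b)[iu])[0, 1]

rng = np.random.default_rng(0)
acts = rng.normal(size=(10, 50))    # 10 stimuli x 50 hidden units
identical = rsa_score(acts, acts)   # identical representations score ~1
```

Comparing a backpropagation-trained layer to a locally-trained layer with `rsa_score` would quantify how similar their learned representations are, independently of any particular neuron ordering.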


Most probably, Deep Neural Networks trained on images will be studied. It is possible that "Invertible Neural Networks" (e.g. [5, 6]) will be used as they should simplify many of those local learning algorithms.


The methodology of this project is deliberately kept open. The focus on methods and techniques will be decided together with the participants.

Due to resource limitations, the research cannot be carried out on the lab's servers. Free resources should be used instead (e.g. Google Colab).


[1] T. P. Lillicrap, A. Santoro, L. Marris, C. J. Akerman, and G. Hinton, “Backpropagation and the brain,” Nature Reviews Neuroscience, vol. 21, no. 6, Art. no. 6, Jun. 2020, doi: 10.1038/s41583-020-0277-3.


[2] N. Kriegeskorte, M. Mur, and P. Bandettini, “Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience,” Front Syst Neurosci, vol. 2, Nov. 2008, doi: 10.3389/neuro.06.004.2008.


[3] A. Ansuini, A. Laio, J. H. Macke, and D. Zoccolan, “Intrinsic dimension of data representations in deep neural networks,” in Advances in neural information processing systems, 2019, vol. 32, [Online]. Available: https://proceedings.neurips.cc/paper/2019/file/cfcce0621b49c983991ead4c3d4d3b6b-Paper.pdf.


[4] V. Erba, M. Gherardi, and P. Rotondo, “Intrinsic dimension estimation for locally undersampled data,” Scientific Reports, vol. 9, no. 1, Art. no. 1, Nov. 2019, doi: 10.1038/s41598-019-53549-9.


[5] L. Ardizzone, J. Kruse, C. Rother, and U. Köthe, “Analyzing Inverse Problems with Invertible Neural Networks,” presented at the International Conference on Learning Representations, Sep. 2018, Accessed: Mar. 05, 2021. [Online]. Available: https://openreview.net/forum?id=rJed6j0cKX.


[6] L. Dinh, J. Sohl-Dickstein, and S. Bengio, “Density estimation using Real NVP,” arXiv:1605.08803 [cs, stat], Feb. 2017, Accessed: Mar. 05, 2021. [Online]. Available: http://arxiv.org/abs/1605.08803.

Non-linear optimal control of neural populations - supervisor: Lena Salfenmoser

There is a great scientific interest in understanding the functioning of the human brain, and modeling neuronal dynamics can enable valuable insights. Various computational models try to capture neuronal activity at different scales in time and space, and at different levels of precision. We investigate a physically meaningful model that incorporates important features such as adaptation and neuronal excitation and inhibition. Depending on the parameters of the model, the neuronal activity can show different behaviors, such as static states or oscillations, in accordance with real human brain activity.

Mathematically, a high-dimensional system of differential equations characterizes the model dynamics. However, nonlinear dynamics and differential equations do not have to be investigated and studied in this project (if not desired).

For this model, an algorithm has been developed that computes how to drive the neuronal activity into any desired state in the most efficient way. This is referred to as optimal control.

 

In this project, there are two possible kinds of tasks:

A.) OPTIMIZATION OF GRADIENT DESCENT

The optimal control algorithm is iterative, and each step comprises two parts. The first part is a mathematically involved computation of the so-called adjoint state. The second part is a gradient descent. This second part of the algorithm allows for various methods to improve efficiency and ensure convergence. After familiarizing themselves with the topic and the algorithm, the students' tasks are:

1. Research different options for performing gradient descent

2. Implementation of descent methods

3. Evaluation and comparison of efficiency and convergence for different parameters

4. Investigation of the possibility to dynamically choose the best option within the iterative computation
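As a warm-up for steps 1-3, candidate descent methods can first be compared on a toy quadratic objective before being plugged into the adjoint-based control algorithm. The update rules and parameters below are illustrative:

```python
import numpy as np

def descend(step, x0, A, b, iters=200):
    """Minimize the quadratic 0.5 x'Ax - b'x with a given update rule."""
    x, state = x0.copy(), None
    for _ in range(iters):
        x, state = step(x, A @ x - b, state)  # A @ x - b is the gradient
    return x

def plain(lr):
    """Fixed-step gradient descent."""
    return lambda x, g, state: (x - lr * g, None)

def momentum(lr, beta=0.9):
    """Heavy-ball momentum descent."""
    def step(x, g, state):
        v = g + beta * (state if state is not None else np.zeros_like(g))
        return x - lr * v, v
    return step

A = np.diag([1.0, 10.0])          # ill-conditioned toy problem
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)    # exact minimizer [1.0, 0.1]
x_gd = descend(plain(0.05), np.zeros(2), A, b)
x_mo = descend(momentum(0.05), np.zeros(2), A, b)
```

Both methods converge here; the interesting comparisons (task 3) are iteration counts, sensitivity to the step size, and behavior on badly conditioned problems, which is where momentum-type and adaptive methods typically pay off.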

 

B.) INVESTIGATION OF CONTROL STRATEGIES FOR DIFFERENT CONTROL TASKS

The control algorithm is used to investigate optimal control strategies for steering the neuronal activity from various initial to various final states. The rich dynamics and the high number of parameters of the model allow for a huge number of scenarios to be investigated. Furthermore, different notions of optimality (time-efficiency, energy-efficiency, and sparseness) and different initial conditions can yield differing control strategies. After familiarizing themselves with the topic and the algorithm, the students' tasks are:

1. Investigation and definition of parameter sets and control tasks of interest (sensible and realistic limiting is essential)

2. Computation of optimal control strategies

3. Investigation of the impact of time-efficiency, energy-efficiency, and sparseness

4. Investigation of the impact of initial conditions

 

 

REQUIREMENTS:

* Good programming skills (Python)

* Interest in nonlinear dynamics, optimal control, and analytical methods of optimization could be helpful and allow for a broader range of tasks and responsibilities.

Number of students: 1-2

Sentiment analysis for predicting the cryptocurrency market - supervisor: Vaios Laschos

It is apparent that when influential people like Elon Musk make statements on Twitter, this can strongly affect different markets (GameStop, Dogecoin, Tesla, Bitcoin). The number of tweets and the language used in them, even from non-influential individuals, can also serve as an indicator of how things will move in the immediate future. The machine learning community is already trying to understand how much of the information exchanged on social media can help predict the movement of various markets (see arxiv.org/abs/1010.3003 and cs229.stanford.edu/proj2015/029_report.pdf). The purpose of this project is to explore this connection between your favorite social media platform and some market (such as the cryptocurrency market).
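To make the idea concrete, here is a deliberately naive lexicon-based sentiment scorer for tweets. The word lists are hypothetical; a real project would use a trained sentiment model and actual scraped tweet data:

```python
import re

# Hypothetical miniature sentiment lexicon, for illustration only.
POS = {"moon", "bullish", "buy", "great", "up"}
NEG = {"crash", "bearish", "sell", "scam", "down"}

def sentiment(tweet):
    """Score a tweet in [-1, 1]: (positive - negative words) / length."""
    words = re.findall(r"[a-z]+", tweet.lower())
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return score / max(len(words), 1)

# averaging over a day's tweets gives one point of a sentiment time series
daily_score = sum(map(sentiment, [
    "Bitcoin to the moon, buy now",     # positive
    "Total scam, market will crash",    # negative
])) / 2
```

Aggregated per day, such scores form the sentiment time series that would then be correlated with (or used to predict) market movements.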

Number of students : 2-4

Comparing a new GAN algorithm with the established one - supervisor: Vaios Laschos

Generative adversarial networks (GANs) have become very popular in the last few years. A special boost came when more "cryptic" mathematical tools like the Wasserstein distance were introduced to the area. The Wasserstein distance is one among many optimal transport distances that can help train generative networks. However, the predominant algorithm cannot accommodate the other transport distances. In our group, we introduced an alternative algorithm that can work with any optimal transport distance (see arxiv.org/abs/1910.00535). The scope of this project is to compare the two algorithms using the most novel metrics and report their corresponding strengths and weaknesses.
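For intuition about the underlying quantity: between two one-dimensional empirical distributions with equal sample counts, the Wasserstein-1 distance reduces to an average difference of sorted samples (a toy special case; the GAN setting works with high-dimensional distributions and neural critics):

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between two equal-sized 1-D
    samples: mean absolute difference of the sorted values."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

real = np.array([0.0, 1.0, 2.0])
fake = np.array([2.5, 0.5, 1.5])
d = wasserstein_1d(real, fake)   # each sorted sample is shifted by 0.5
```

The distance measures how far probability mass must be transported, which is exactly the notion the compared training algorithms optimize in higher dimensions.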

Remark: This project is designed to be continued as a Master's thesis, with a very high probability of publication. Processing power will be provided for this project.

Number of students : 1-4

Flatland challenge - supervisor: Vaios Laschos

Researching efficient and automated solutions for the vehicle rescheduling problem (VRSP) is still a major focus of logistics companies such as Deutsche Bahn or Swiss Federal Railways. To study the VRSP, Mohanty et al. (2020) (https://arxiv.org/abs/2012.05893) developed an environment that provides an easy-to-use interface for conducting experiments, possibly using novel approaches from Reinforcement Learning or Imitation Learning. At NeurIPS 2020, a large community of researchers worked on this problem using the proposed "Flatland" environment. The goal of this project is to tackle the efficient management of dense traffic on (complex) railway networks using this environment, with the possibility to participate in the "AMLD 2021 Flatland Challenge" (https://flatland.aicrowd.com/intro.html).

Number of students : 1-4

Using Machine learning for predictions on the spreading of Corona - supervisor: Vaios Laschos

COVID-19 has been ubiquitous for more than a year now and has become part of our daily lives. Nevertheless, there are hardly any reliable methods available that provide information about the current infection prevalence in the population, let alone allow predictions for the near future. A group of Facebook scientists recently presented an autoregressive model combined with recurrent neural networks that permits precise predictions for the USA over a period of 14 days (https://dataforgood.fb.com/tools/covid-19-forecasts/). Inputs to the model are not only the typical epidemiological metrics but also numerous open data sources such as mobility and weather. Hence, the focus on data and machine learning is crucial to the success of Facebook's model. In this project, we will replicate the study to generate predictions for Denmark, Germany, and other countries for which we have sufficient data, and provide visual forecasts to the public.
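The autoregressive core of such a model can be sketched in a few lines: fit lag coefficients by least squares and roll the model forward one step. The case numbers below are made up for illustration; the real model adds recurrent networks and covariates such as mobility and weather:

```python
import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of AR(order) coefficients (oldest lag first)."""
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# made-up daily case counts, purely for illustration
cases = np.array([100.0, 120.0, 150.0, 190.0, 240.0, 300.0, 370.0])
coef = fit_ar(cases)
forecast = cases[-3:] @ coef      # one-step-ahead prediction
```

Iterating the last line, feeding each forecast back in as the newest lag, yields the multi-day horizons (e.g. 14 days) used in the Facebook study.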

Remark: If someone is interested in the topic but finds this approach too technical, there is the alternative of studying the spread of Corona using sentiment analysis (see github.com/echen102/COVID-19-TweetIDs). We are also open to any other suggestion. This project can lead to a Master's thesis.

Number of students : 1-4

Reinforcement learning and Meta-learning in Board games - supervisor: Vaios Laschos

The aim of this project is to learn and use state-of-the-art reinforcement learning techniques in order to create agents that outperform humans in various board/computer games. Ideally, we would like the agents to be able to adapt to their partner's playstyle when they engage in team games.

 

Number of students: open

Developing a Mandarin or Spanish word embedding space for modeling human brain responses during language comprehension - supervisor: Fatma Deniz

In natural language processing (NLP), word embeddings are vector representations of words that encode their meaning, with the property that words with similar meanings are close in the vector space. Different methods exist to generate context-less (e.g. word2vec, GloVe) or context-dependent (e.g. BERT, ELMo) vector representations of words, which have been shown to achieve state-of-the-art results in downstream NLP tasks. However, the majority of word embeddings are language-dependent (and mostly English-centric).

As part of a larger research question on how the human brain processes more than one language, in this project students will be asked to create a lexical co-occurrence model as a word embedding space using Mandarin (or Spanish) online text corpora [Turney and Pantel 2010]. Different evaluation metrics will be applied and compared. Similar to previous work [Huth et al. 2016; Deniz et al. 2019], the newly created embedding spaces will be used in modeling human brain data during language comprehension.
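A toy version of the lexical co-occurrence approach: count co-occurrences within a symmetric context window and reduce the log-transformed count matrix with an SVD. The two tokenized Spanish "sentences" are illustrative; a real embedding space would be built from large corpora:

```python
import numpy as np

def cooccurrence_embeddings(sentences, window=2, dim=2):
    """Toy lexical co-occurrence embeddings: count co-occurrences within
    a symmetric window, log-transform, reduce with an SVD."""
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    C[idx[w], idx[s[j]]] += 1
    U, S, _ = np.linalg.svd(np.log1p(C))
    return vocab, U[:, :dim] * S[:dim]

# two illustrative tokenized Spanish sentences
sents = [["el", "perro", "come", "carne"],
         ["el", "gato", "come", "pescado"]]
vocab, emb = cooccurrence_embeddings(sents)   # one 2-D vector per word
```

Words that occur in similar contexts end up with similar count rows and hence nearby embedding vectors, which is the property the evaluation metrics in the project would probe.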

Mandarin or Spanish knowledge will be helpful!

 

 

Deniz, F., Nunez-Elizalde, A.O., Huth, A.G. and Gallant, J.L., 2019. The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality. Journal of Neuroscience, 39(39), pp.7722-7736.

 

Huth, A.G., De Heer, W.A., Griffiths, T.L., Theunissen, F.E. and Gallant, J.L., 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600), pp.453-458.

 

Turney, P. D. & Pantel, P., 2010. From frequency to meaning: vector space models of semantics. J. Artif. Intell. Res. 37, 141-188.

 

Number of students: 2-3

Predicting human brain responses during language comprehension using artificial neural networks - supervisor: Fatma Deniz

How the human brain processes linguistic features when reading a story is still an unresolved question. Modern advancements in natural language processing can facilitate our understanding of language processing in the human brain. In this project, students will use a battery of deep learning based natural language processing models to extract features (word embedding representations) from text. These features will then be used to find a mapping to human brain responses collected while participants were reading a text [Deniz et al. 2019]. Different model architectures will be compared based on their predictions of human brain responses.
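The feature-to-brain-response mapping in such encoding models is typically estimated with regularized linear regression. A minimal ridge-regression sketch on synthetic data (dimensions, noise level, and variable names are arbitrary):

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Ridge regression weights mapping features X to responses Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))      # 100 timepoints x 10 embedding dims
B_true = rng.normal(size=(10, 5))   # ground-truth map to 5 "voxels"
Y = X @ B_true + 0.1 * rng.normal(size=(100, 5))
B = ridge_fit(X, Y, alpha=0.1)
accuracy = np.corrcoef((X @ B).ravel(), Y.ravel())[0, 1]
```

In the project, `X` would hold word-embedding features of the text being read and `Y` the recorded brain responses; model architectures are then compared by held-out prediction accuracy per voxel.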

Replication of a study; NLP knowledge required!

Deniz, F., Nunez-Elizalde, A.O., Huth, A.G. and Gallant, J.L., 2019. The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality. Journal of Neuroscience, 39(39), pp.7722-7736.

Number of students: 2

The generation of two-tone Mooney images using deep learning - Supervisor: Fatma Deniz/Heiner Spieß

Mooney images are degraded two-tone (black-and-white) images where a hidden object in the image is not immediately interpretable but after a while human observers can identify it with ease. How such poor information can lead to object recognition is an important question in cognitive neuroscience and machine learning.

Using a thresholding method, we have previously created a database of two-tone Mooney images, as described in [Imamoglu et al. 2012, 2013]. However, using recent advancements in deep learning based methods (e.g. generative adversarial network models [Ke et al. 2017]), we could advance the generation of Mooney images and thereby facilitate further research on object recognition in humans and machines. In this project, students will investigate different deep learning based neural network architectures with the aim of generating an experimentally valid two-tone Mooney image database, acquiring human behavioral responses, and comparing these to the results of images generated through different baseline models.
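The classical thresholding pipeline can be sketched as smoothing followed by binarization. The box blur and median threshold below are simplifications of the actual method; deep-learning-based generation would replace exactly this step:

```python
import numpy as np

def mooney(image, sigma=1):
    """Two-tone conversion: blur a grayscale image (box blur as a simple
    stand-in for Gaussian smoothing), then binarize at the median."""
    k = 2 * sigma + 1
    padded = np.pad(image.astype(float), sigma, mode="edge")
    h, w = image.shape
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(k) for j in range(k)) / k ** 2
    return (blurred > np.median(blurred)).astype(np.uint8)

gradient = np.arange(64.0).reshape(8, 8)   # stand-in "photograph"
two_tone = mooney(gradient)                # binary image of 0s and 1s
```

A GAN-based generator would instead learn which regions to flatten to black or white so that the hidden object stays recoverable for human observers.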

 

Imamoglu, F., Kahnt, T., Koch, C. and Haynes, J.D., 2012. Changes in functional connectivity support conscious object recognition. Neuroimage, 63(4), pp.1909-1917.

 

Imamoglu, F., Koch, C. and Haynes, J.D., 2013. MoonBase: Generating a database of two-tone Mooney images. Journal of Vision, 13(9), pp.50-50.

 

Tsung-Wei Ke, Stella X. Yu, and David Whitney. "Mooney Face Classification and Prediction by Learning across Tone", in ICIP2017

Number of students: 3-4

Multistage attention Pooling for Short Duration Sound Event Detection - supervisor: Wale Adewusi

Sound event detection involves marking temporal stamps for multiple instances of sound events within the same test audio sample and the eventual classification of these instances.

End-to-end modeling with convolutional neural networks (CNNs) has been widely used in sound event detection tasks owing to its efficiency in exploiting the invariant features of audio events.

Usually, the last representation for this task in CNNs makes use of either temporal pooling (average activation across the temporal region) or max pooling (maximum activation) to accumulate the temporal features into a global representation for event classification. While average pooling can effectively aggregate features for long-duration events, it is characterized by deletion errors and interference. On the other hand, max pooling can aggregate features for short-duration events but introduces insertion errors into the results. These pooling functions tend to introduce confusion into event classification. Furthermore, sound event labels provide a holistic description at the semantic level; as such, determining the temporal region and the features that influence the event detection becomes an ill-posed task.

The main thrust of this work is therefore to model a two-stage attention pooling algorithm that accumulates local temporal context by incorporating global and local attention modules into a convolutional recurrent neural network, harnessing temporal local features for discriminative feature extraction and modeling, based on the approach in (Xugang et al. 2018: Temporal attention pooling for audio event detection).

The model will be evaluated on strongly labeled, publicly available audio event datasets, and the strength of this multistage attention module in aggregating enough context for short-duration event detection will be assessed.
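The difference between mean, max, and attention pooling is easiest to see in code: attention pooling scores each frame and takes a weighted average, with mean pooling recovered as the special case of uniform weights. A NumPy sketch with an untrained, purely illustrative weight vector:

```python
import numpy as np

def attention_pool(frames, w):
    """Score each frame with a weight vector w, softmax the scores, and
    return the attention-weighted average of the frame features."""
    scores = frames @ w                       # one score per frame, (T,)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()               # softmax attention weights
    return alpha @ frames                     # pooled feature vector, (D,)

rng = np.random.default_rng(1)
frames = rng.normal(size=(20, 8))             # T=20 frames x D=8 features
pooled = attention_pool(frames, np.zeros(8))  # zero scores → mean pooling
```

In the actual model, `w` (or a small network producing the scores) is learned, letting the pooling sharpen toward the few frames that contain a short-duration event instead of diluting them across the whole clip.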


Number of students: 2

Projects from previous semesters

Explaining the human visual brain challenge with deep learning models


Supervisor:

 

Understanding how the human brain works is one of the greatest challenges that science faces today. Given a set of images of everyday objects and the corresponding brain activity recorded while human subjects viewed those images, we will devise computational (deep learning) models that predict brain activity; these will then be used to predict the brain activity for a brand-new set of images.

Meta reinforcement learning in higher-order bandit tasks

Supervisor:

 

Instead of solving a single particular MDP, a meta-RL agent learns in various environments (we'll create higher-order bandit tasks) in order to obtain general knowledge about them.

Creating agents to play Mahjong using deep reinforcement learning

Supervisors:

 

Multi-agent reinforcement learning in imperfect information games.

Using Reinforcement Learning to optimize wind turbine usage

Supervisor:

 

Using Reinforcement Learning in a wind turbine simulator to optimize wind turbine usage

Creating agents that play the Hanabi game based on evolutionary algorithms

Supervisor:

 

We use population-based training to create a pool of rule-based agents that can play the game of Hanabi.

Time sensitive prediction of attention allocation

Supervisor: 

Image saliency is a well-researched topic in computer vision. In this project, we want to tackle the harder problem of predicting not only where humans look on average, but their actual scanpaths on naturalistic stimuli. Methods depend on the group's prior knowledge and can range from information maximization or statistical modelling to deep learning.

Decoding activity of neural networks with unsupervised learning

Supervisor: Veronika Koren

 

We use a recently developed method for decoding the activity of biological neural networks, but apply unsupervised learning instead of supervised learning.


Use DQNN to create a teacher in the game of Go

Supervisor:

 

By studying the current implementations for solving the game of Go, we will try to design an agent that wins the game but by a close score. This agent can act as a teacher.

Training machine/deep learning methods to identify mechanistic models of neural dynamics

Supervisor: 

 

We use machine/deep learning models (Gaussian process surrogate optimization and deep neural density estimators) to infer model parameters of biologically realistic computational neuroscience models.

Excitatory-inhibitory interactions in the auditory cortex in schizophrenia

Supervisor: 

 

Replication of an existing model of E-I interactions in auditory cortex; extending it to include aberrant E-I interactions in schizophrenia.

Tackling mode collapse with a flexible prior distribution for the GAN


Supervisor:

Since being proposed in 2014, generative adversarial nets (GANs) have shown their potential in various tasks like image generation, 3D object generation, image super-resolution, and video prediction. Nevertheless, they are considered highly unstable to train and prone to missing modes, such that the whole latent space is mapped to a single mode or only a few modes.

In this project, the students should manipulate the prior distribution of the generator during training. The aim is to learn the different modes already in the latent space of the generator without explicitly making the latent space multimodal. The performance should be evaluated on toy datasets (8 modes) and MNIST.

Complex Network Measures of Brain Connectivity and Their Relation to Behaviour

Rubinov & Sporns, 2010

The application of graph theory in neuroscience has allowed the brain to be characterized in terms of structural and functional connectivity and has led to key insights regarding brain function. This approach, however, implies that the brain has to be subdivided into a certain number of regions according to a predefined parcellation scheme (Rubinov & Sporns, 2010). Nevertheless, no standard has been established for this scheme, and it is not well understood how its choice impacts subsequent results. In this project, we aim to compare if and how the complex network measures described in the paper above differ based on the parcellation scheme choice. To this end, we will use connectome data from healthy participants in the Human Brain Project. Furthermore, these measures will be correlated with behavioural data from the same project, such as working memory performance or sleep quality, in order to determine whether differences in structural brain connectivity are related to behavioural ones. Should time permit, the analyses will also be performed on groups of patients with neuropsychiatric disorders.
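As an example of such a measure, the per-node clustering coefficient can be computed directly from a binary adjacency matrix. A NumPy sketch on a toy graph (real analyses would use a dedicated toolbox and weighted variants):

```python
import numpy as np

def clustering_coefficients(A):
    """Per-node clustering coefficient of a binary undirected graph:
    triangles through the node over possible triangles."""
    A = (np.asarray(A) > 0).astype(float)
    np.fill_diagonal(A, 0)
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0     # closed triangles per node
    possible = deg * (deg - 1) / 2.0         # connected neighbour pairs
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(possible > 0, triangles / possible, 0.0)

# toy "parcellation": three mutually connected regions plus one pendant
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
c = clustering_coefficients(A)   # → [1.0, 1.0, 1/3, 0.0]
```

Recomputing such measures on connectomes built under different parcellation schemes, and correlating them with behaviour, is exactly the comparison the project proposes.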

Controllability of Brain Networks

Muldoon et al. 2016

Supervisor:

While brain stimulation has become increasingly used in clinical settings, techniques to optimize stimulation parameters are still relatively scarce. Recent studies apply methods from network control theory to predict the effect that stimulating certain regions has on brain dynamics, locally and globally (Muldoon et al., Stimulation-Based Control of Dynamic Brain Networks, PLoS Comput Biol 12(9), 2016). In this project, we aim to replicate the results of the above study using connectome data from healthy human subjects in the Human Connectome Project. If time permits, subgroup analyses can be performed to see whether the results depend on factors such as age or sex. Additionally, the analyses can also be performed on groups of patients with neurological or psychiatric disorders, again if time permits.

Random cross-embedding on neural spike trains

Supervisor:

We work with spiking data from neural networks of modest size recorded in the monkey brain. The network is spatially organized into three layers: deep, middle, and superficial. We are interested in how the three layers interact with each other. This is easily done by computing the correlation between neural signals across pairs of layers. However, correlation is a bidirectional measure that does not tell us in which direction the influence flows. We therefore use a nonlinear cross-mapping method that allows us to determine the directionality of interaction between two layers. The method uses an embedding in a high-dimensional space and the computation of nearest neighbours in that space. Say we have the signal from layer 1, X(t), and from layer 2, Y(t). We compute an embedding for each of the signals, Mx and My. We then use the information from Mx to reconstruct My, and vice versa, use the information from My to reconstruct Mx. The method is not trivial, but not too hard to implement either; I have already implemented it and can help if you get stuck. The method is generic and can be used on many types of data; for example, here there is an application to face recognition:

Roweis, Sam T., and Lawrence K. Saul. "Nonlinear dimensionality reduction by locally linear embedding." Science 290.5500 (2000): 2323-2326.

You can also check:

Saul, Lawrence K., and Sam T. Roweis. "Think globally, fit locally: unsupervised learning of low dimensional manifolds." Journal of Machine Learning Research 4 (2003): 119-155.
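The cross-mapping idea described above can be sketched roughly as follows (a simplified illustration in the spirit of convergent cross mapping, not the supervisor's implementation; embedding dimension, delay, and neighbor weighting are illustrative choices):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a 1-D signal: row t is
    (x[t+(dim-1)*tau], x[t+(dim-2)*tau], ..., x[t])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack(
        [x[(dim - 1 - i) * tau:(dim - 1 - i) * tau + n] for i in range(dim)]
    )

def cross_map_skill(x, y, dim=3, tau=1):
    """Estimate how well the delay embedding of x reconstructs y.
    For each embedded point, y is predicted as a distance-weighted
    average of y at the nearest embedding neighbors; the returned
    correlation between y and its estimate is the cross-map skill."""
    Mx = delay_embed(x, dim, tau)
    y_target = y[(dim - 1) * tau:]           # align y with embedding rows
    n = len(Mx)
    y_hat = np.empty(n)
    for t in range(n):
        d = np.linalg.norm(Mx - Mx[t], axis=1)
        d[t] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[:dim + 1]         # dim+1 nearest neighbors
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        w /= w.sum()
        y_hat[t] = np.dot(w, y_target[nn])   # weighted neighbor average
    return np.corrcoef(y_target, y_hat)[0, 1]
```

Comparing `cross_map_skill(x, y)` against `cross_map_skill(y, x)` is what makes the measure directional, in contrast to plain correlation.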

Computational modelling of separable resonant circuits controlled by different interneuron types

Vierling-Claassen et al. 2010

Supervisor: Christoph Metzner

Cortical interneurons show a remarkable morphological, anatomical, and electrophysiological diversity; however, we are only beginning to understand how this diversity translates into functional differences. While it is well established that fast-spiking, parvalbumin-positive interneurons are crucially involved in the generation of fast cortical oscillations, the circuitry controlling slower cortical oscillations remains elusive. Through computational modeling, Vierling-Claassen et al. (Vierling-Claassen et al.: Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons. Frontiers in Human Neuroscience 2010, 4, 198) have hypothesized that low-threshold-spiking, somatostatin-positive interneurons control low-frequency oscillatory activity in cortical circuits. In this project, we will replicate the computational model from the above-mentioned study using a novel simulation tool (NetPyNE) that allows for automatic parallelization of simulations and therefore for a significant speed-up. Furthermore, we will replace one of the single-cell models with a newly developed model that allows for the integration of data on genetic variants found in schizophrenic patients. This will in turn enable the exploration of genetic mechanisms underlying oscillatory deficits in patients with schizophrenia.
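The core intuition that interneuron kinetics set the rhythm of an excitatory-inhibitory loop can be illustrated with a toy linearized rate model (this is not the Vierling-Claassen model; time constants and coupling weights are purely illustrative):

```python
import numpy as np

def oscillation_frequency_hz(tau_e, tau_i, w_ee=1.5, w_ei=4.0,
                             w_ie=4.0, w_ii=0.0):
    """Dominant oscillation frequency of a linearized
    excitatory-inhibitory loop (time constants in ms).
    Returns 0 if the fixed point is non-oscillatory."""
    J = np.array([[(w_ee - 1.0) / tau_e, -w_ei / tau_e],
                  [w_ie / tau_i, -(1.0 + w_ii) / tau_i]])
    eig = np.linalg.eigvals(J)
    # Imaginary part of the eigenvalues (rad/ms) -> frequency (Hz)
    return float(np.max(np.abs(eig.imag))) / (2.0 * np.pi) * 1000.0

# A fast-spiking-like interneuron population (short time constant)
# supports a faster rhythm than an LTS-like population (longer one).
f_fast = oscillation_frequency_hz(tau_e=10.0, tau_i=5.0)
f_slow = oscillation_frequency_hz(tau_e=10.0, tau_i=20.0)
```

With these illustrative parameters the fast-interneuron circuit resonates in the gamma range while the slow-interneuron circuit resonates at a markedly lower frequency, mirroring the separable resonant circuits studied in the paper.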

 

 

Solving games with RL (Montezuma’s Revenge, Hanabi)


Supervisor:

Reinforcement learning is an area of machine learning concerned with how artificial agents should act in an environment so as to maximize some notion of cumulative reward. Reinforcement learning lies at the intersection of many disciplines (game theory, control theory, operations research, information theory, simulation-based optimization, etc.) and has many applications in real life (economics, robotics, etc.). Video games and board games provide useful testbeds for creating and improving reinforcement learning algorithms, because they are complex enough to help us understand basic principles, yet not so complex as to make analysis "impossible" (as is the case for most real-life problems). The goal of this project is to build artificial agents that play games in which either the reward is sparse or the state of the environment is not fully accessible to the agent.
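As a concrete starting point, here is a minimal tabular Q-learning sketch on a toy chain environment with a single sparse reward at the far end (an illustrative baseline only; the games above call for far more sophisticated methods):

```python
import numpy as np

def q_learning_chain(n_states=10, episodes=2000, alpha=0.1,
                     gamma=0.95, eps=0.1, max_steps=100, seed=0):
    """Tabular Q-learning on a chain MDP with a sparse reward:
    the agent starts at state 0, actions move left (0) or right (1),
    and the only reward (+1) is at the rightmost state."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            if rng.random() < eps:
                a = int(rng.integers(2))
            else:  # greedy with random tie-breaking
                best = np.flatnonzero(Q[s] == Q[s].max())
                a = int(rng.choice(best))
            s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
            done = (s2 == n_states - 1)
            r = 1.0 if done else 0.0
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
            if done:
                break
    return Q
```

Even in this tiny example the sparse reward is the hard part: before the goal is reached for the first time, every update target is zero, which is exactly the exploration problem that games like Montezuma's Revenge magnify.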

RSA vs. performance

Supervisor: Youssef Kashef

In this project, we will study how representational similarity analysis (RSA) measures correlate with the classification performance of different neural networks trained on classification tasks.
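The basic RSA computation can be sketched as follows (a minimal illustration assuming correlation-distance RDMs compared via Pearson correlation; RSA variants differ in both choices):

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: pairwise
    (1 - Pearson correlation) between stimulus representations.
    activations: (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(activations)

def rsa_similarity(act_a, act_b):
    """Second-order similarity of two representations: correlation
    of the upper triangles of their RDMs, so networks with different
    feature dimensionalities can still be compared."""
    iu = np.triu_indices(act_a.shape[0], k=1)
    return np.corrcoef(rdm(act_a)[iu], rdm(act_b)[iu])[0, 1]
```

Scores like this, computed between layers of different trained networks, are the quantity one would correlate with each network's task performance.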

Beyond Relative Entropy: Training GANs with alternative error functions

Supervisor:

In the framework of Generative Adversarial Networks (GANs), pairs of interacting networks are studied: a "generative" network is trained to produce new samples from the distribution of the training data, while a "discriminative" network is trained to decide whether or not a sample has been drawn from the training data distribution. At the end of the training phase, the discriminator is no longer able to distinguish between training data and newly generated samples. This procedure yields a network that generates new samples of very complex objects, such as natural images, from unstructured input, including the case of images with added noise. In the traditional approach, the relative entropy is used to quantify the distance between the learnt distribution and the distribution of the data during the learning phase. In a more recent approach, the standard error function is replaced by the Wasserstein-1 distance, with superior results. The main goal of this project is to replicate the classical results by training GANs both with the relative entropy as the benchmark error function and with alternative distances. Finally, the training results are compared and analysed.
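The two objectives being compared can be sketched as plain functions of discriminator/critic outputs (a minimal illustration of the loss functions only, detached from any actual network training):

```python
import numpy as np

def gan_discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss: binary cross-entropy on
    discriminator outputs in (0, 1) for real and generated samples."""
    eps = 1e-12  # numerical guard against log(0)
    return (-np.mean(np.log(d_real + eps))
            - np.mean(np.log(1.0 - d_fake + eps)))

def wasserstein_critic_loss(c_real, c_fake):
    """WGAN critic loss: the critic outputs unbounded scores and is
    trained to maximize the score gap between real and fake samples
    (written here as a loss to minimize); the required Lipschitz
    constraint on the critic is assumed to be enforced elsewhere."""
    return np.mean(c_fake) - np.mean(c_real)
```

The practical difference is that the cross-entropy loss saturates when the discriminator becomes confident, starving the generator of gradient, whereas the Wasserstein objective keeps providing a useful training signal.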

Forget Autoencoders

Lupe

Supervisor:

An autoencoder is a neural network trained to reproduce its input (e.g. an image) after applying multiple successive transformations into a higher- (or lower-) dimensional space. Autoencoders have been used to pre-train neural networks before training them on machine learning tasks such as image classification. This project investigates whether the representation (e.g. the weights) needs to be kept close to the autoencoder representation. In other words, when we move to the classification task, how much can we "forget" of what we learned before? This project requires strong Python skills and assumes good knowledge of machine learning algorithms such as gradient descent and back-propagation.
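As a minimal illustration of the kind of model involved (an illustrative sketch, not the project code), here is a tiny linear autoencoder trained by full-batch gradient descent with hand-derived back-propagation:

```python
import numpy as np

def train_linear_autoencoder(X, k=2, lr=0.005, steps=8000, seed=0):
    """Minimal linear autoencoder x -> W2 @ (W1 @ x), trained by
    full-batch gradient descent on the mean squared reconstruction
    error. X: (n_samples, n_features); k: bottleneck dimension."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.standard_normal((k, d)) * 0.3   # encoder weights
    W2 = rng.standard_normal((d, k)) * 0.3   # decoder weights
    for _ in range(steps):
        H = X @ W1.T                         # codes, (n, k)
        R = H @ W2.T                         # reconstructions, (n, d)
        E = R - X                            # reconstruction error
        gW2 = (E.T @ H) / n                  # gradient w.r.t. decoder
        gW1 = (E @ W2).T @ X / n             # backprop through decoder
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2
```

In the pre-training setting the project asks about, `W1` would initialize a classifier's first layer, and the question is how far fine-tuning can drift from it before the benefit of pre-training disappears.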

Slides

Student Paper

Dimensionality reduction of brain microcircuits


Supervisor:

In this project, we study the behavior of small networks recorded in the monkey visual areas V1 and V4. The dimensionality of the high-dimensional data is reduced with Principal Component Analysis (PCA), and the aim is to determine whether different experimental conditions are linearly separable in the PCA space. We search for separability of two types of stimuli, as well as of two types of behavioral choices. Required background: the MI course or another machine learning course.
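The analysis pipeline can be sketched in a few lines (an illustrative sketch on synthetic data; the perceptron check is one simple way to test linear separability):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project data onto its first principal components via SVD.
    X: (n_trials, n_neurons)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def linearly_separable(Z, y, epochs=200):
    """Test separability of two classes in the projection Z with a
    plain perceptron: returns True iff the perceptron reaches zero
    training errors (guaranteed on linearly separable data)."""
    y2 = np.where(y == y[0], 1.0, -1.0)
    Zb = np.hstack([Z, np.ones((len(Z), 1))])  # append bias term
    w = np.zeros(Zb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(Zb, y2):
            if t * (w @ x) <= 0:               # misclassified point
                w += t * x
                errors += 1
        if errors == 0:
            return True
    return False
```

Here the labels `y` would encode the two stimulus types (or the two behavioral choices), and `X` the trial-by-neuron activity matrix.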

Slides
