Neural Information Processing Project


In this course, participants work on a time-limited project drawn from the current research of the Neural Information Processing group. The topics cover not only artificial neural networks but also the modelling and analysis of biological neural networks, as well as applications of machine learning and artificial intelligence. The course is designed as a supervised introduction to independent scientific work. During the project, participants will read original literature, at first under guidance and later independently; write a short project summary and present it to an expert audience; engage with complex, real-world research problems; defend their results in a scientific poster presentation; and document them in a short article according to scientific standards. The course therefore also includes a seminar component.

Additional information can be found in the ISIS course. Details on the course structure and registration will be discussed during the first meeting on Wednesday, 10 April 2019, at 10:15 in room MAR 5.060. Please direct any further questions to .

Some projects for the coming semester are described below.

Differential regulation of the effects of acoustic context by cortical interneuron types

[Figure: Phillips et al. 2017]

Supervisor: Christoph Metzner

Acoustic context has a large modulatory effect on both behavioural and neural responses to auditory stimuli. One example is the suppression of responses to subsequently presented stimuli with similar spectral properties, known as forward suppression (FWS). Several candidate mechanisms mediating FWS have been proposed (e.g. short-term synaptic depression, spike-frequency adaptation and cortical inhibition), but the strength of their relative contributions is still unknown. In a recent study, Phillips et al. (Phillips et al.: Cortical Interneurons Differentially Regulate the Effects of Acoustic Context, Cell Reports 20, 771–778, 2017) demonstrated that optogenetic inactivation of somatostatin-positive interneurons weakens FWS, while suppression of a different interneuron subtype (parvalbumin-positive cells) strengthens FWS, suggesting that the two subtypes play different roles in regulating the context dependence of auditory processing. In this project, we will replicate the computational model proposed in this study, but with a biophysically more detailed model that allows for an in-depth analysis of the underlying mechanisms.

Tackling mode collapse with a flexible prior distribution for the GAN


Supervisor:

Since their introduction in 2014, generative adversarial networks (GANs) have shown their potential in various tasks such as image generation, 3D object generation, image super-resolution and video prediction. Nevertheless, they are notoriously unstable to train and prone to mode collapse, in which the whole latent space is mapped onto a single mode or only a few modes.

In this project, the students will manipulate the prior distribution of the generator during training. The aim is to capture the different modes already in the latent space of the generator, rather than explicitly fixing a multimodal latent distribution in advance. The resulting performance should be evaluated on toy datasets (8 modes) and on MNIST.
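To make the setup concrete, here is a minimal sketch (Python/NumPy; all function names are hypothetical) of the 8-mode toy dataset and of sampling latent codes from a mixture-of-Gaussians prior instead of the usual single Gaussian. In the actual project, the mixture parameters would be adapted during training rather than drawn once as done here:

```python
import numpy as np

def eight_modes(n, radius=2.0, std=0.05, rng=None):
    """Toy dataset: n points drawn from 8 Gaussians arranged on a circle."""
    rng = rng or np.random.default_rng(0)
    angles = 2 * np.pi * rng.integers(0, 8, size=n) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

def multimodal_prior(n, k=8, dim=2, scale=1.0, rng=None):
    """Sample latent codes from a mixture of k Gaussians instead of N(0, I).
    Here the component means are fixed; in the project they would be learned."""
    rng = rng or np.random.default_rng(1)
    means = scale * rng.standard_normal((k, dim))
    comps = rng.integers(0, k, size=n)
    return means[comps] + 0.1 * rng.standard_normal((n, dim))

real = eight_modes(512)      # targets for the discriminator
z = multimodal_prior(512)    # inputs for the generator
```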

Complex Network Measures of Brain Connectivity and Their Relation to Behaviour

[Figure: Rubinov & Sporns, 2010]

Supervisor:

The application of graph theory in neuroscience has allowed the brain to be characterized in terms of structural and functional connectivity and has led to key insights regarding brain function. This approach, however, requires the brain to be subdivided into a certain number of regions according to a predefined parcellation scheme (Rubinov & Sporns, 2010). No standard has been established for this scheme, and it is not well understood how its choice affects subsequent results. In this project, we aim to investigate if and how the complex network measures described in the paper above differ depending on the choice of parcellation scheme. To this end, we will use connectome data from healthy participants in the Human Connectome Project. Furthermore, these measures will be correlated with behavioural data from the same project, such as working memory performance or sleep quality, in order to determine whether differences in structural brain connectivity are related to behavioural differences. Should time permit, the analyses will also be performed on groups of patients with neuropsychiatric disorders.
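As a rough illustration of the kind of analysis involved, the following sketch (Python with networkx; the file names and threshold are hypothetical) computes a few of the measures discussed by Rubinov & Sporns on a binarized connectivity matrix; the paper also treats weighted variants of all of them:

```python
import numpy as np
import networkx as nx

def network_measures(connectivity, threshold=0.1):
    """Compute a few graph measures from a region-by-region connectivity
    matrix, binarized at `threshold`."""
    adj = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                     # remove self-connections
    G = nx.from_numpy_array(adj)
    return {
        "clustering": nx.average_clustering(G),
        "global_efficiency": nx.global_efficiency(G),
        "mean_degree": np.mean([d for _, d in G.degree()]),
    }

# Hypothetical usage: one matrix per parcellation scheme of the same subject.
# coarse = np.load("connectome_68.npy"); fine = np.load("connectome_264.npy")
# print(network_measures(coarse), network_measures(fine))
```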

Controllability of Brain Networks

[Figure: Muldoon et al. 2016]

Supervisor:

While brain stimulation is increasingly used in clinical settings, techniques to optimize stimulation parameters are still relatively scarce. Recent studies apply methods from network control theory to predict the effect that stimulating certain regions has on brain dynamics, both locally and globally (Muldoon et al.: Stimulation-Based Control of Dynamic Brain Networks, PLoS Comput Biol 12(9), 2016, URL). In this project, we aim to replicate the results of the above study, using connectome data from healthy human subjects in the Human Connectome Project. If time permits, subgroup analyses can be performed to see whether the results depend on factors such as age or sex. Additionally, again if time permits, the analyses can also be performed on groups of patients with neurological or psychiatric disorders.
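For orientation, here is a minimal sketch of average controllability in the linear network-control framework that this line of work builds on (Python/SciPy). The spectral normalization follows the convention of related work such as Gu et al. 2015 and is an assumption here, not a detail taken from Muldoon et al.:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(A):
    """Average controllability of each node for x(t+1) = A x(t) + B u(t),
    where A is the structural connectivity matrix (scaled for stability,
    as assumed in the linear network-control literature)."""
    A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A))))  # spectral radius < 1
    n = A.shape[0]
    ac = np.empty(n)
    for i in range(n):
        B = np.zeros((n, 1)); B[i] = 1.0           # control input at node i only
        W = solve_discrete_lyapunov(A, B @ B.T)    # controllability Gramian
        ac[i] = np.trace(W)                        # average controllability
    return ac
```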

Random cross-embedding on neural spike trains

Supervisor: Veronika Koren

We work with spiking neural networks of modest size recorded from the monkey brain. The network is spatially organized into three layers: deep, middle and superficial. We are interested in how the three layers interact with each other. This is easily done by computing the correlation between neural signals across pairs of layers. However, correlation is a bidirectional measure that does not tell us in which direction the influence flows. We therefore use a nonlinear cross-mapping method that allows us to determine the directionality of the interaction between two layers. The method uses an embedding in a high-dimensional space and the computation of nearest neighbours in that space. Say we have the signal from layer 1, which we call X(t), and the signal from layer 2, which we call Y(t). We compute an embedding for each signal, Mx and My. We then use the information in Mx to reconstruct My and, vice versa, the information in My to reconstruct Mx (a minimal sketch of this procedure follows the references below). The method is not super simple, but not too hard to implement either. I have already implemented it and I can help if you get stuck. The method is generic and can be used on many types of data; for example, here is an application to face recognition:

Roweis, Sam T., and Lawrence K. Saul. "Nonlinear dimensionality reduction by locally linear embedding." Science 290.5500 (2000): 2323–2326.

You can also check:

Saul, Lawrence K., and Sam T. Roweis. "Think globally, fit locally: unsupervised learning of low dimensional manifolds." Journal of Machine Learning Research 4 (2003): 119–155.
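As promised above, here is a heavily simplified sketch of the cross-mapping procedure itself: delay embedding plus nearest-neighbour reconstruction, in the spirit of convergent cross mapping. The embedding parameters and the exponential weighting scheme are illustrative choices, and x and y are assumed to be 1-D float arrays (e.g. binned activity from two layers):

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Each row is the delay vector (x(t), x(t-tau), ..., x(t-(dim-1)*tau))."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
                            for k in range(dim)])

def cross_map(x, y, dim=3, tau=2):
    """Predict y from the embedding of x; high skill suggests that
    information about y is recoverable from x's reconstructed manifold."""
    Mx = delay_embed(x, dim, tau)
    y_target = y[(dim - 1) * tau:]
    preds = np.empty_like(y_target)
    for t in range(len(Mx)):
        d = np.linalg.norm(Mx - Mx[t], axis=1)
        d[t] = np.inf                               # exclude the point itself
        nn = np.argsort(d)[: dim + 1]               # dim+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))   # exponential distance weights
        preds[t] = np.dot(w, y_target[nn]) / w.sum()
    return np.corrcoef(preds, y_target)[0, 1]       # cross-mapping skill
```

Comparing cross_map(x, y, ...) with cross_map(y, x, ...) then gives the asymmetry from which the direction of interaction is inferred.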

Computational modelling of separable resonant circuits controlled by different interneuron types

[Figure: Vierling-Claassen et al. 2010]

Supervisor: Christoph Metzner

Cortical interneurons show a remarkable morphological, anatomical and electrophysiological diversity; however, we are only beginning to understand how this diversity translates into functional differences. While it is well established that fast-spiking, parvalbumin-positive interneurons are crucially involved in the generation of fast cortical oscillations, the circuitry controlling slower cortical oscillations remains elusive. Through computational modeling, Vierling-Claassen et al. (Vierling-Claassen et al.: Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons. Frontiers in Human Neuroscience 2010, 4, 198) hypothesized that low-threshold-spiking, somatostatin-positive interneurons control low-frequency oscillatory activity in cortical circuits. In this project, we will replicate the computational model of the above-mentioned study using a novel simulation tool (NetPyNE) that allows for automatic parallelization of simulations and therefore a significant speed-up. Furthermore, we will replace one of the single-cell models with a newly developed model that allows for the integration of data on genetic variants found in schizophrenic patients. This will in turn enable the exploration of genetic mechanisms underlying oscillatory deficits in patients with schizophrenia.
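To give a flavour of how NetPyNE specifies networks declaratively, here is a minimal sketch following the conventions of the NetPyNE tutorials (it requires NEURON to run). The populations and parameters are illustrative placeholders, not the Vierling-Claassen model:

```python
from netpyne import specs, sim

netParams = specs.NetParams()

# Illustrative two-population circuit; parameters are placeholders,
# NOT taken from Vierling-Claassen et al.
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 20}
netParams.popParams['LTS'] = {'cellType': 'LTS', 'numCells': 5}

netParams.cellParams['PYR'] = {'secs': {'soma': {
    'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
    'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036, 'gl': 0.003, 'el': -70}}}}}
netParams.cellParams['LTS'] = netParams.cellParams['PYR']  # placeholder dynamics

netParams.synMechParams['inh'] = {'mod': 'Exp2Syn', 'tau1': 0.5, 'tau2': 20.0, 'e': -80}
netParams.connParams['LTS->E'] = {
    'preConds': {'pop': 'LTS'}, 'postConds': {'pop': 'E'},
    'probability': 0.3, 'weight': 0.005, 'delay': 2, 'synMech': 'inh'}

simConfig = specs.SimConfig()
simConfig.duration = 500
simConfig.analysis['plotRaster'] = True

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```

Because the network is a plain declarative specification, NetPyNE can distribute the cells across MPI processes automatically, which is the speed-up mentioned above.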


Solving games with RL (Montezuma’s Revenge, Hanabi)


Supervisor:

Reinforcement learning is an area of machine learning concerned with how artificial agents should act in an environment so as to maximize some notion of cumulative reward. Reinforcement learning lies at the intersection of many disciplines (game theory, control theory, operations research, information theory, simulation-based optimization, etc.) and has many applications in real life (economics, robotics, etc.). Video games and board games provide useful testbeds for creating and improving reinforcement learning algorithms: they are complex enough to help us understand basic principles, but not so complex as to make analysis "impossible" (as most real-life problems are). The goal of this project is to build artificial agents that play games where either the reward is sparse or the state of the environment is not fully accessible to the agent.
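As a minimal, self-contained example of the sparse-reward setting, the following sketch runs tabular Q-learning on a chain environment whose only reward sits at the far end. Everything here is illustrative; the project itself targets games such as Montezuma's Revenge or Hanabi:

```python
import numpy as np

def q_learning_chain(n_states=20, episodes=2000, eps=0.1, alpha=0.1, gamma=0.99):
    """Tabular Q-learning on a chain where the only reward (+1) is at the
    far end, a minimal sparse-reward problem (actions: 0 = left, 1 = right)."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):              # episode step limit
            a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])  # TD update
            s = s2
            if r > 0:
                break
    return Q

Q = q_learning_chain()
print((np.argmax(Q, axis=1) == 1).mean())  # fraction of states preferring "right"
```

Even in this toy problem the agent rarely sees a reward under pure epsilon-greedy exploration, which is the core difficulty the project will confront at much larger scale.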

RSA vs. performance

Supervisor: Youssef Kashef

In this project, we will look at the correlation between representational similarity analysis (RSA) measures and the performance of different neural networks trained on classification tasks.
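A minimal sketch of the comparison, assuming activation matrices (stimuli x units) extracted from two trained networks, might look like this (Python/SciPy; variable names are hypothetical):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Representational dissimilarity matrix (condensed form): pairwise
    correlation distance between activation patterns for each stimulus."""
    return pdist(activations, metric="correlation")

def rsa_similarity(act_a, act_b):
    """Spearman correlation between two networks' RDMs over the same stimuli."""
    return spearmanr(rdm(act_a), rdm(act_b)).correlation

# Hypothetical usage: rows = stimuli, columns = units of some layer.
# sim = rsa_similarity(features_net1, features_net2)
```

Such pairwise RSA scores would then be correlated with the networks' classification accuracies.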

Projects from past semesters

Beyond Relative Entropy: Training GANs with alternative error functions

Supervisor: Vaios Laschos

In the framework of generative adversarial networks (GANs), pairs of interacting networks are studied: a "generative" network is trained to produce new samples from the distribution of the training data, while a "discriminative" network is trained to decide whether or not a sample has been drawn from the training data distribution. At the end of the training phase, the discriminator is no longer able to distinguish between training data and newly generated samples. This procedure yields a network that generates new samples of very complex objects, such as natural images, from unstructured input, including images with added noise. In the traditional approach, the relative entropy is used to quantify the distance between the learnt distribution and the data distribution during the learning phase. In a more recent approach, the standard error function is replaced by the Wasserstein-1 distance, with superior results. The main goal of this project is to replicate the classical results by training GANs with both the relative entropy as a benchmark error function and with alternative distances. Finally, the training results are compared and analysed.
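For concreteness, a sketch of the two discriminator/critic objectives in PyTorch is given below. The weight-clipping comment follows the original WGAN recipe, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def gan_d_loss(d_real, d_fake):
    """Standard GAN discriminator loss (on logits); at the discriminator's
    optimum, the generator objective relates to the Jensen-Shannon divergence."""
    real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    fake = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return real + fake

def wgan_critic_loss(d_real, d_fake):
    """WGAN critic loss: an estimate of (minus) the Wasserstein-1 distance,
    valid only while the critic is kept approximately 1-Lipschitz."""
    return -(d_real.mean() - d_fake.mean())

# After each critic step in WGAN training (weight clipping, original paper):
# for p in critic.parameters():
#     p.data.clamp_(-0.01, 0.01)
```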

Forget Autoencoders


Supervisor:

An autoencoder is a neural network trained to reproduce its input (e.g. an image) after applying multiple successive transformations to a higher- (or lower-) dimensional space. Autoencoders have been used to pre-train neural networks before training them on machine learning tasks such as image classification. This project investigates whether the representation (i.e. the weights) needs to stay close to the autoencoder representation. In other words, when we move to the classification task, how much can we "forget" of what we learned before? This project requires strong Python skills and assumes good knowledge of machine learning algorithms such as gradient descent and back-propagation.
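One simple way to operationalize "staying close" to the pretrained representation is an explicit proximity penalty during fine-tuning. The following PyTorch sketch is one possible (hypothetical) instantiation, not a prescribed approach:

```python
import torch

def proximity_penalty(model, reference, coeff=1e-3):
    """L2 penalty tying the current weights to the autoencoder-pretrained ones;
    sweeping `coeff` down to 0 probes how much 'forgetting' the task allows."""
    return coeff * sum(((p - q) ** 2).sum()
                       for p, q in zip(model.parameters(), reference))

# Hypothetical usage during fine-tuning on the classification task:
# reference = [p.detach().clone() for p in encoder.parameters()]  # post-pretraining
# loss = task_loss + proximity_penalty(encoder, reference, coeff=1e-3)
```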

Slides

Student Paper

Dimensionality reduction of brain microcircuits


Supervisor:

In this project, we study the behavior of small networks recorded in the monkey visual areas V1 and V4. The dimensionality of the high-dimensional data is reduced with principal component analysis (PCA), and the aim is to determine whether different experimental conditions are linearly separable in the PCA space. We search for separability of two types of stimuli, as well as of two types of behavioral choices. Required background: the MI course or another machine learning course.
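A compact sketch of the intended analysis (Python with scikit-learn; function and variable names are hypothetical) could look like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pca_separability(trials, labels, n_components=10):
    """Project trials (trials x features, e.g. neurons * time bins) onto the
    leading PCs and test linear separability of two conditions by
    cross-validated classification accuracy."""
    pcs = PCA(n_components=n_components).fit_transform(trials)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, pcs, labels, cv=5).mean()

# Hypothetical usage with two stimulus conditions:
# acc = pca_separability(X, y)   # accuracy near 0.5 = not linearly separable
```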

Slides
