Deep Networks

Deep neural networks are highly successful in many application areas, yet it remains unclear why. In particular, the success of overparameterized neural networks contradicts the predictions of classical statistical learning theory. By analyzing the representations these networks learn, we try to gain new insights. We are interested in the following questions:

  • Are (visual) tasks "related", and can we quantify the "closeness" of tasks (see the sketch after this list)?
  • What are the respective contributions of the data set (input statistics) and the task demands (input-output statistics)?
  • How can we mine these relationships efficiently?
  • Does the concept of an intermediate-level representation help?
  • Are there universal representations for (visual) data that allow many "everyday" tasks to be solved efficiently?
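
As a concrete illustration of what "quantifying the similarity of representations" can mean, the following minimal sketch computes linear Centered Kernel Alignment (CKA), a standard measure for comparing the activations of two networks (or two layers) on the same inputs. This is an illustrative sketch, not code from our publications; the function name linear_cka and the toy data are assumptions.

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between two activation matrices.

        X: (n_samples, d1) activations of network/layer A on a fixed input set
        Y: (n_samples, d2) activations of network/layer B on the same inputs
        Returns a similarity score in [0, 1].
        """
        # Center each feature dimension (CKA assumes centered activations).
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)
        # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
        cross = np.linalg.norm(Y.T @ X, "fro") ** 2
        return cross / (np.linalg.norm(X.T @ X, "fro")
                        * np.linalg.norm(Y.T @ Y, "fro"))

    # Toy check: CKA is invariant to orthogonal rotations of feature space,
    # so a rotated copy of a representation scores ~1, an unrelated one low.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(500, 64))                   # 500 inputs, 64-dim features
    Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # random orthogonal matrix
    print(linear_cka(A, A @ Q))                      # ~1.0
    print(linear_cka(A, rng.normal(size=(500, 64)))) # small, near 0

Applied layer-wise and across tasks, such a measure gives one possible handle on the "closeness" questions above.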

Currently, we meet every Thursday at 2 pm to discuss these issues and gain new insights. If you are interested, don't hesitate to get in touch.

Selected Publications:

Goerttler, T., Müller, L. and Obermayer, K. (2022). Representation Change in Model-Agnostic Meta-Learning. ICLR Blog Track.

Goerttler, T. and Obermayer, K. (2021). Exploring the Similarity of Representations in Model-Agnostic Meta-Learning. Learning to Learn Workshop at ICLR 2021.

Müller, L., Ploner, M., Goerttler, T. and Obermayer, K. (2021). An Interactive Introduction to Model-Agnostic Meta-Learning. Workshop on Visualization for AI Explainability at IEEE VIS.

