Deep Networks
Deep neural networks are very successful in many application areas, yet it remains unclear why. In particular, the success of overparameterized neural networks contradicts classical results from statistical learning theory. By analyzing the representations these networks learn, we try to gain new insights. We are interested in the following questions:
- Are (visual) tasks "related", and can we quantify the "closeness" of tasks? (See the sketch after this list.)
- What are the contributions of data set (input statistics) vs. task demand (input-output statistics)?
- How can we efficiently mine these relationships?
- Does the concept of an intermediate-level representation help?
- Are there universal representations for (visual) data that allow for efficient solutions to many "everyday" tasks?
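One common way to make the "closeness" of tasks concrete is to compare the representations that networks trained on different tasks induce on the same inputs, for example with linear Centered Kernel Alignment (CKA). The sketch below is a minimal illustration with toy data; the function name and the random "activations" are our assumptions for the example, not code from the group's work.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices
    X (n_samples x d1) and Y (n_samples x d2) on the same inputs."""
    # Center each feature dimension
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Toy example: compare representations of the same 500 inputs
# produced by two hypothetical task-specific networks.
rng = np.random.default_rng(0)
reps_task_a = rng.normal(size=(500, 64))
reps_task_b = reps_task_a @ rng.normal(size=(64, 64)) * 0.5 \
    + rng.normal(size=(500, 64))
print(f"CKA similarity: {linear_cka(reps_task_a, reps_task_b):.3f}")
```

A CKA value near 1 indicates highly similar representations (and, by this proxy, closely related tasks); values near 0 indicate unrelated ones.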
Currently, we meet every Thursday at 2 pm to discuss these issues and gain new insights. If you are interested, don't hesitate to contact us at deep.networks@ni.tu-berlin.de.
Selected Publications:
| Citation key | Mueller20210 |
|---|---|
| Author | Müller, L. and Ploner, M. and Goerttler, T. and Obermayer, K. |
| Year | 2021 |
| Journal | Workshop on Visualization for AI Explainability at IEEE VIS |
| Abstract | In this article, we give an interactive introduction to model-agnostic meta-learning (MAML), a well-established method in the area of meta-learning. Meta-learning is a research field that attempts to equip conventional machine learning architectures with the power to gain meta-knowledge about a range of tasks to solve problems like the one above on a human level of accuracy. |
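The abstract above concerns model-agnostic meta-learning (MAML). As a rough illustration of the idea, and not code from the publication, here is a first-order MAML step for a linear regression model with squared loss; the function names and the toy task format are assumptions made for this sketch.

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.001):
    """One meta-update: adapt theta to each task on its support set
    (inner loop), then update theta using the gradient of the
    post-adaptation loss on the query set (outer loop).
    First-order approximation: second derivatives are ignored."""
    meta_grad = np.zeros_like(theta)
    for X_support, y_support, X_query, y_query in tasks:
        # Inner loop: one gradient step on the task's support set
        grad_inner = 2 * X_support.T @ (X_support @ theta - y_support) / len(y_support)
        theta_task = theta - inner_lr * grad_inner
        # Outer loop: evaluate the adapted parameters on the query set
        meta_grad += 2 * X_query.T @ (X_query @ theta_task - y_query) / len(y_query)
    return theta - outer_lr * meta_grad / len(tasks)

# Toy usage: tasks are (X_support, y_support, X_query, y_query) tuples
# drawn from linear models whose weights vary around a shared solution.
rng = np.random.default_rng(1)
d = 5
w_shared = rng.normal(size=d)

def make_task(n=10):
    w_task = w_shared + 0.1 * rng.normal(size=d)
    Xs, Xq = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    return Xs, Xs @ w_task, Xq, Xq @ w_task

theta = np.zeros(d)
for _ in range(100):
    theta = maml_step(theta, [make_task() for _ in range(4)])
```

The meta-learned theta serves as an initialization from which a few inner-loop steps suffice to adapt to a new, related task.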