Learning on Structured Representations


Learning from examples in order to predict is one of the standard tasks in machine learning. Many techniques have been developed to solve classification and regression problems, but by far most of them were specifically designed for vectorial data. Vectorial data are very convenient because of the structure imposed by the Euclidean metric. For many data sets (protein sequences, text, images, videos, chemical formulas, etc.), however, a vector-based description is not only inconvenient but may simply be wrong, and representations that consider relationships between objects, or that embed objects in spaces with non-Euclidean structure, are often more appropriate.

Here we follow different approaches to extend learning from examples to non-vectorial data. One approach focuses on an extension of kernel methods, leading to learning algorithms specifically designed for relational data representations of a general form. A second approach - specifically designed for objects which are naturally represented in terms of finite combinatorial structures - explores embeddings into quotient spaces of a Euclidean vector space ("structure spaces"). A third approach considers representations of data in spaces with data-adapted geometries, i.e. it uses Riemannian manifolds as models for data spaces; a minimal illustration of this idea is sketched below. In this context we are also interested in active learning schemes based on geometrical concepts. The developed algorithms have been applied to various application domains, including bio- and chemoinformatics (cf. "Research" page "Applications to Problems in Bio- and Chemoinformatics") and the analysis of multimodal neural data (cf. "Research" page "MRI, EM, Autoradiography, and Multi-modal Data").
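A common discrete stand-in for a Riemannian data geometry is to approximate geodesic distances by shortest paths on a neighborhood graph. The following minimal sketch (Python, with synthetic data and generic scikit-learn/SciPy tooling; it is not the group's own software) shows how strongly such graph geodesics can differ from plain Euclidean distances on curved data:

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

# Synthetic example: points along a spiral embedded in the plane. The
# Euclidean metric ignores the curve; graph geodesics follow it.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.5, 3 * np.pi, 200))
X = np.column_stack([t * np.cos(t), t * np.sin(t)])

# k-nearest-neighbor graph with Euclidean edge weights; shortest paths on
# this graph give a discrete approximation of geodesic distances.
G = kneighbors_graph(X, n_neighbors=6, mode="distance")
D_geo = shortest_path(G, method="D", directed=False)

# Compare the two metrics between the endpoints of the spiral.
d_euc = np.linalg.norm(X[0] - X[-1])
print(f"Euclidean: {d_euc:.2f}  geodesic: {D_geo[0, -1]:.2f}")

Distances of this kind can replace the Euclidean metric in distance-based learners and in geometry-driven active learning schemes.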



Acknowledgement: This work was funded by the BMWA and by the Technical University of Berlin.

Software:

The Potential Support Vector Machine (P-SVM)
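The P-SVM learns directly from relational or dyadic data: its input is a matrix of measured relations between the objects to be classified and a (possibly different) set of reference objects, and this matrix need not be square or positive semidefinite. The sketch below only illustrates the shape of such a problem; a plain linear SVM on a synthetic dyadic matrix serves as a hypothetical stand-in, not as the P-SVM itself:

import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical dyadic data: a rectangular relation matrix between 40 "row"
# objects (the samples) and 7 "column" objects (reference objects, e.g.
# proteins against which similarities were measured).
rng = np.random.default_rng(1)
D = rng.normal(size=(40, 7))                      # samples x reference objects
y = np.where(D[:, 0] + 0.5 * D[:, 3] > 0, 1, -1)  # synthetic labels

# Stand-in learner: treat the relations to reference objects as features.
clf = LinearSVC().fit(D, y)

# Columns with large absolute weight mark informative reference objects,
# loosely the role played by "support features" in the P-SVM.
print(np.argsort(-np.abs(clf.coef_.ravel()))[:3])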

Selected Publications:

Representation Change in Model-Agnostic Meta-Learning
Citation key Goerttler2022
Author Goerttler, T. and Müller, L. and Obermayer, K.
Year 2022
Journal ICLR Blog Track
Abstract Last year, an exciting adaptation of one of the most popular optimization-based meta-learning approaches, model-agnostic meta-learning (MAML) [Finn et al., 2017], was proposed in "BOIL: Towards Representation Change for Few-shot Learning" (Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun; ICLR 2021). The authors adapt MAML by freezing the last layer to force body-only inner-loop learning (BOIL). Interestingly, this is complementary to ANIL (almost no inner loop), proposed in "Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML" (Aniruddh Raghu, Maithra Raghu, Samy Bengio, Oriol Vinyals; ICLR 2020). Both papers attempt to understand the success of MAML and improve it. Oh et al. [2021] compare BOIL, ANIL, and MAML and show that both variants improve the performance of MAML, although BOIL outperforms ANIL, especially when the task distribution varies between training and testing.
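The distinction between MAML, ANIL, and BOIL comes down to which parameters receive inner-loop gradient steps. A minimal PyTorch sketch with an assumed toy architecture (not the papers' actual code):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy few-shot model: a feature-extracting "body" and a classifying "head".
body = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
head = nn.Linear(32, 5)

def inner_update(params, loss, lr=0.01):
    # One inner-loop gradient step; create_graph=True keeps the graph so the
    # outer (meta) loop can differentiate through the adaptation.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - lr * g for p, g in zip(params, grads)]

x, y = torch.randn(10, 8), torch.randint(0, 5, (10,))
loss = F.cross_entropy(head(body(x)), y)

# MAML adapts everything; ANIL ("almost no inner loop") adapts only the
# head; BOIL freezes the head and adapts only the body, forcing the
# representation itself to change during adaptation.
maml = inner_update(list(body.parameters()) + list(head.parameters()), loss)
anil = inner_update(list(head.parameters()), loss)
boil = inner_update(list(body.parameters()), loss)

The outer loop (not shown) would evaluate the adapted parameters on query data and update all parameters, as in standard MAML.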
