
Learning Vector Quantization and Self-organizing Maps

Self-organizing maps, often termed Kohonen maps, are a versatile and widely used tool for exploratory data analysis. Here we were interested in mathematically characterizing the embedding properties of the self-organizing map. We proposed robust learning schemes based on deterministic annealing, and we investigated extensions of the self-organizing map to relational data representations, which include pairwise data as a special case. Emphasis was placed on formulations based on cost functions and optimization, and we investigated how the different variants of the self-organizing map relate to each other and to the original Kohonen map. We also studied prototype-based classifiers related to learning vector quantization, with a particular focus on improved learning schemes. Self-organizing maps were furthermore investigated in the context of understanding self-organization and pattern formation in neural development; for details, see the "Research" page "Models of Neural Development".
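The classical learning rules behind both topics are compact enough to state directly. The sketch below is a minimal, self-contained illustration of the standard online Kohonen update for a 2-D SOM and of a single LVQ1 step; it is not the cost-function, relational, or deterministic-annealing variants developed in this project, and all parameter names and values are illustrative.

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iter=2000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Classical online Kohonen learning for a 2-D map (sketch)."""
    rng = np.random.default_rng(seed)
    n_units = grid_shape[0] * grid_shape[1]
    # Prototype vectors, one per map unit, initialized randomly.
    w = rng.standard_normal((n_units, data.shape[1]))
    # Fixed 2-D grid coordinates of the map units.
    gy, gx = np.unravel_index(np.arange(n_units), grid_shape)
    grid = np.column_stack((gy, gx)).astype(float)

    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: nearest prototype in input space.
        bmu = np.argmin(np.sum((w - x) ** 2, axis=1))
        # Exponentially decaying learning rate and neighborhood width.
        frac = t / n_iter
        lr = lr0 * (0.01 / lr0) ** frac
        sigma = sigma0 * (0.5 / sigma0) ** frac
        # Gaussian neighborhood on the map grid around the winner.
        d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # Move every prototype toward x, weighted by its neighborhood value.
        w += lr * h[:, None] * (x - w)
    return w

def lvq1_step(w, proto_labels, x, y, lr=0.05):
    """One LVQ1 update: attract the nearest prototype if its label
    matches the sample's class, repel it otherwise."""
    k = np.argmin(np.sum((w - x) ** 2, axis=1))
    sign = 1.0 if proto_labels[k] == y else -1.0
    w[k] += sign * lr * (x - w[k])
    return w
```

The neighborhood function is what distinguishes the SOM from plain vector quantization: because all prototypes move toward each input in proportion to their grid proximity to the winner, neighboring map units end up representing neighboring regions of the input space.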

Acknowledgement: Research was funded by the Technische Universität Berlin.


Selected Publications:

Active Learning in Self-Organizing Maps
Hasenjäger, M., Ritter, H., and Obermayer, K. (1999). In: Kohonen Maps, pp. 57–70. Elsevier. ISBN 978-0-444-50270-4.
Abstract: The self-organizing map (SOM) was originally proposed by T. Kohonen in 1982 on biological grounds and has since become a widespread tool for exploratory data analysis. Although introduced as a heuristic, SOMs have been related to statistical methods in recent years, which led to a theoretical foundation in terms of cost functions as well as to extensions to the analysis of pairwise data, in particular of dissimilarity data. In our contribution, we first relate SOMs to probabilistic autoencoders, re-derive the SOM version for dissimilarity data, and review part of the above-mentioned work. We then turn our attention to the fact that dissimilarity-based algorithms scale as O(D^2), where D denotes the number of data items, and may therefore become impractical for real-world datasets. We find that the majority of the elements of a dissimilarity matrix are redundant and that a sparse matrix with more than 80% missing values suffices to learn a SOM representation of low cost. We then describe a strategy for selecting the most informative dissimilarities for a given set of objects: we suggest selecting (and measuring) only those elements whose knowledge maximizes the expected reduction in the SOM cost function. We find that active data selection is computationally expensive, but may reduce the number of necessary dissimilarities by more than a factor of two compared to a random selection strategy. This makes active data selection a viable alternative when the cost of actually measuring dissimilarities between data objects is high.
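To make the idea of learning a SOM from an incomplete dissimilarity matrix concrete, the sketch below runs a simple batch variant in which each item is assigned to the map unit that minimizes the neighborhood-weighted mean of its measured dissimilarities to the items of each unit. This is a simplified stand-in under our own assumptions, not the paper's algorithm; the `mask` argument marks which matrix entries have actually been measured, and the active-selection step is omitted.

```python
import numpy as np

def dissimilarity_som(D, mask, grid_shape=(5, 5), n_iter=30,
                      sigma0=2.0, seed=0):
    """Batch SOM sketch for pairwise dissimilarity data with missing
    entries. D is a symmetric (N, N) dissimilarity matrix; mask[i, j]
    is True where D[i, j] has actually been measured."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    n_units = grid_shape[0] * grid_shape[1]
    gy, gx = np.unravel_index(np.arange(n_units), grid_shape)
    grid = np.column_stack((gy, gx)).astype(float)
    # Squared grid distances between all pairs of map units.
    g2 = np.sum((grid[:, None, :] - grid[None, :, :]) ** 2, axis=-1)
    assign = rng.integers(n_units, size=n)  # random initial mapping

    for t in range(n_iter):
        sigma = sigma0 * (0.5 / sigma0) ** (t / n_iter)
        h = np.exp(-g2 / (2.0 * sigma ** 2))  # neighborhood kernel
        # e[s, i]: mean *measured* dissimilarity between item i and
        # the items currently assigned to unit s (NaN if none observed).
        e = np.full((n_units, n), np.nan)
        for s in range(n_units):
            members = assign == s
            obs = mask[members]
            num = np.where(obs, D[members], 0.0).sum(axis=0)
            cnt = obs.sum(axis=0)
            e[s] = np.where(cnt > 0, num / np.maximum(cnt, 1), np.nan)
        # Neighborhood-weighted cost of placing item i on unit r,
        # normalized over units with observed dissimilarities to i.
        e_filled = np.nan_to_num(e, nan=0.0)
        valid = (~np.isnan(e)).astype(float)
        cost = (h @ e_filled) / np.maximum(h @ valid, 1e-12)
        assign = np.argmin(cost, axis=0)
    return assign
```

Starting from such a masked matrix, the active learning scheme proposed in the abstract would then repeatedly pick the as-yet-unmeasured entry whose knowledge promises the largest expected reduction of the SOM cost function, measure it, and add it to the mask.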
