
Learning Vector Quantization and Self-organizing Maps

Self-organizing maps, often termed Kohonen maps, are a versatile and widely used tool for exploratory data analysis. Here we were interested in mathematically characterizing the embedding properties of the self-organizing map. We proposed robust learning schemes using deterministic annealing, and we investigated extensions of the self-organizing map to relational data representations, which include pairwise data as a special case. Emphasis was given to formulations based on cost functions and optimization, and we investigated how the different variants of the self-organizing map relate to each other and to the original Kohonen map. We also studied prototype-based classifiers related to learning vector quantization, with a particular focus on improved learning schemes. Self-organizing maps were also investigated in the context of understanding self-organization and pattern formation in neural development; for details, see the "Models of Neural Development" page under "Research".
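As a concrete illustration of the prototype-based classifiers mentioned above, here is a minimal sketch of the classical LVQ1 update rule (not the improved learning schemes developed in this project). It is written in Python with NumPy; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20, seed=0):
    """Classical LVQ1 rule (illustrative sketch; names/parameters hypothetical).

    The nearest prototype is attracted toward a training sample when their
    labels match, and repelled otherwise.
    """
    rng = np.random.default_rng(seed)
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            # Winner: prototype closest to the sample in Euclidean distance.
            k = np.argmin(np.linalg.norm(W - X[i], axis=1))
            sign = 1.0 if proto_labels[k] == y[i] else -1.0
            W[k] += sign * lr * (X[i] - W[k])
    return W
```

A new point is then classified with the label of its nearest prototype, e.g. `proto_labels[np.argmin(np.linalg.norm(W - x_new, axis=1))]`.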

Acknowledgement: Research was funded by the Technische Universität Berlin.


Selected Publications:

Self-Organizing Maps: Stationary States, Metastability and Convergence Rate
Erwin, E., Obermayer, K., and Schulten, K.
Biological Cybernetics 67, pp. 35-45, Springer-Verlag, 1992. ISSN 0340-1200, DOI: 10.1007/BF00201800.

Abstract: We investigate the effect of various types of neighborhood function on the convergence rates and the presence or absence of metastable stationary states of Kohonen's self-organizing feature map algorithm in one dimension. We demonstrate that the time necessary to form a topographic representation of the unit interval [0, 1] may vary over several orders of magnitude depending on the range and also the shape of the neighborhood function, by which the weight changes of the neurons in the neighborhood of the winning neuron are scaled. We will prove that for neighborhood functions which are convex on an interval given by the length of the Kohonen chain there exist no metastable states. For all other neighborhood functions, metastable states are present and may trap the algorithm during the learning process. For the widely used Gaussian function there exists a threshold for the width above which metastable states cannot exist. Due to the presence or absence of metastable states, convergence time is very sensitive to slight changes in the shape of the neighborhood function. Fastest convergence is achieved using neighborhood functions which are "convex" over a large range around the winner neuron and yet have large differences in value at neighboring neurons.
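The setting analyzed in this paper is easy to reproduce. The sketch below (Python/NumPy; parameter values are illustrative assumptions, not taken from the paper) trains a one-dimensional Kohonen chain on stimuli drawn uniformly from the unit interval [0, 1], with a Gaussian neighborhood function scaling the weight changes of the neurons around the winner:

```python
import numpy as np

def kohonen_chain(n_neurons=50, n_steps=20000, lr=0.1, sigma=5.0, seed=0):
    """1-D Kohonen chain mapping [0, 1] (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.random(n_neurons)          # one scalar weight per chain site
    idx = np.arange(n_neurons)
    for _ in range(n_steps):
        x = rng.random()               # stimulus, uniform on [0, 1]
        s = np.argmin(np.abs(w - x))   # winner: best-matching neuron
        # Gaussian neighborhood scales the weight change around the winner.
        h = np.exp(-(idx - s) ** 2 / (2.0 * sigma ** 2))
        w += lr * h * (x - w)
    return w
```

Whether the chain has formed a topographic representation can be checked by testing the weights for monotonicity along the chain, e.g. `np.all(np.diff(w) > 0) or np.all(np.diff(w) < 0)`; with a narrow neighborhood a run may instead end in one of the metastable, non-monotonic states discussed in the paper.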
