
Learning Vector Quantization and Self-organizing Maps

Self-organizing maps, often termed Kohonen maps, are a versatile and widely used tool for exploratory data analysis. Here we were interested in mathematically characterizing the embedding properties of the Self-organizing Map. We proposed robust learning schemes using deterministic annealing, and we investigated extensions of the Self-organizing Map to relational data representations, which include pairwise data as a special case. Emphasis was given to formulations based on cost functions and optimization, and we investigated how the different variants of the Self-organizing Map relate to each other and to the original Kohonen map. We also studied prototype-based classifiers related to Learning Vector Quantization, with a particular focus on improved learning schemes. Self-organizing maps were also investigated in the context of understanding self-organization and pattern formation in neural development; for details, see the "Models of Neural Development" page under "Research".
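To make the basic mechanism concrete, the sketch below implements a standard online Kohonen map update (best-matching unit plus a Gaussian neighborhood on the map grid) on toy data. It is a minimal illustration assuming common textbook choices; the function train_som, the exponential decay schedules, and all parameter values are illustrative assumptions and not code from the publications listed here.

import numpy as np

def train_som(data, grid_shape=(10, 10), n_iter=5000,
              lr0=0.5, sigma0=5.0, seed=0):
    """Online Kohonen SOM: an illustrative sketch, not the group's code.

    data: (n_samples, n_features) array.
    Returns the prototype weights, one per grid node.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Grid coordinates of each map node, used by the neighborhood kernel.
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    # Initialize prototypes with random samples from the data.
    weights = data[rng.integers(0, len(data), rows * cols)].astype(float)

    for t in range(n_iter):
        # Exponentially decay learning rate and neighborhood width (assumed schedule).
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        x = data[rng.integers(len(data))]
        # Best-matching unit: the prototype closest to the input.
        bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))
        # Gaussian neighborhood on the grid around the BMU.
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # Move every prototype toward x, weighted by the neighborhood.
        weights += lr * h[:, None] * (x - weights)
    return weights

if __name__ == "__main__":
    # Toy usage: map 3-D points onto a 10x10 grid.
    X = np.random.default_rng(1).normal(size=(1000, 3))
    W = train_som(X)
    print(W.shape)  # (100, 3)

Shrinking the neighborhood width over training moves the map from global ordering toward local refinement, the same large-to-small progression that the annealing-based schemes mentioned above exploit.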

Acknowledgement: Research was funded by the Technische Universität Berlin.


Selected Publications:

Soft Nearest Prototype Classification
Citation key Seo2003a
Author Seo, S. and Bode, M. and Obermayer, K.
Pages 390–398
Year 2003
DOI 10.1109/TNN.2003.809407
Journal IEEE Transactions on Neural Networks
Volume 14
Publisher IEEE
Abstract We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of Learning Vector Quantization. The algorithm performs a gradient descent on a cost function minimizing the classification error on the training set. We investigate the properties of the algorithm and assess its performance for several toy data sets and for an optical letter classification task. Results show (i) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy, (ii) that classification results are better than those obtained with standard Learning Vector Quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes, and (iii) that annealing of the width parameter improves the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
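To illustrate how such an annealed, cost-function-based scheme can look in practice, the sketch below gives one plausible reading of soft nearest prototype classification: Gaussian responsibilities softly assign each sample to labeled prototypes, and gradient descent on the expected misclassification cost attracts correct-class prototypes and repels the others while the kernel width is annealed. Function names, the linear annealing schedule, and all parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np

def snpc_train(X, y, protos, labels, n_epochs=30,
               lr=0.1, sigma_start=2.0, sigma_end=0.5, seed=0):
    """Soft nearest prototype classification: an illustrative sketch.

    X: (n, d) samples, y: (n,) class labels.
    protos: (m, d) initial prototypes, labels: (m,) their class labels.
    """
    rng = np.random.default_rng(seed)
    protos = protos.astype(float).copy()
    sigmas = np.linspace(sigma_start, sigma_end, n_epochs)  # assumed annealing schedule
    for sigma in sigmas:
        for i in rng.permutation(len(X)):
            x, t = X[i], y[i]
            d2 = np.sum((protos - x) ** 2, axis=1)
            # Soft assignment P(j|x) from a Gaussian kernel (softmax form).
            g = -d2 / (2.0 * sigma ** 2)
            p = np.exp(g - g.max())
            p /= p.sum()
            correct = (labels == t).astype(float)
            s_correct = p @ correct  # probability mass on correct-class prototypes
            # Gradient step on the misclassification cost: attract correct
            # prototypes toward x, push wrong ones away.
            coef = p * (correct - s_correct) / sigma ** 2
            protos += lr * coef[:, None] * (x - protos)
    return protos

def snpc_predict(X, protos, labels):
    """Hard nearest-prototype decision after training."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return labels[np.argmin(d, axis=1)]

# Toy usage: two Gaussian classes, two prototypes per class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
protos0 = np.vstack([X[y == 0][:2], X[y == 1][:2]])
labels0 = np.array([0, 0, 1, 1])
W = snpc_train(X, y, protos0, labels0)
print((snpc_predict(X, W, labels0) == y).mean())

Annealing the width from large to small first lets every prototype respond to each sample and then sharpens the soft assignment toward a hard nearest-prototype rule, which is the sense in which the abstract describes the method as an annealed version of Learning Vector Quantization.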
