TU Berlin

Neural Information Processing



Learning Vector Quantization and Self-organizing Maps

Self-organizing maps, often termed Kohonen maps, are a versatile and widely used tool for exploratory data analysis. Here we were interested in mathematically characterizing the embedding properties of the Self-organizing Map. We proposed robust learning schemes using deterministic annealing, and we investigated extensions of the Self-organizing Map to relational data representations, which include pairwise data as a special case. Emphasis was placed on formulations based on cost functions and optimization, and we investigated how the different variants of the Self-organizing Map relate to each other and to the original Kohonen map. We also studied prototype-based classifiers related to Learning Vector Quantization, with a particular focus on improved learning schemes. Self-organizing maps were also investigated in the context of understanding self-organization and pattern formation in neural development; for details, see the "Models of Neural Development" page under "Research".
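As a point of reference for the extensions discussed above, the classical Kohonen update (best-matching unit plus a shrinking Gaussian neighborhood) can be sketched as follows. Grid size, learning-rate schedule, and neighborhood schedule are illustrative choices, not values taken from the publications on this page.

```python
import numpy as np

def som_train(X, grid=(6, 6), epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen Self-organizing Map sketch.

    X is an (N, D) data matrix; returns the (rows*cols, D) prototype matrix.
    All hyperparameters here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Fixed positions of the map units on the 2-D grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying step size
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood width
        for x in rng.permutation(X):
            # Best-matching unit: prototype closest to the sample
            bmu = np.argmin(np.sum((W - x) ** 2, axis=1))
            # Gaussian neighborhood kernel on the grid around the BMU
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            # Move every prototype toward x, weighted by grid proximity
            W += lr * h[:, None] * (x - W)
    return W
```

The neighborhood kernel is what distinguishes the SOM from plain vector quantization: neighboring grid units are dragged along with the winner, which is what produces the topology-preserving embedding studied here.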

Acknowledgement: Research was funded by the Technische Universität Berlin.


Selected Publications:

Soft Learning Vector Quantization
Seo, S. and Obermayer, K. (2003). Neural Computation, 15(7):1589–1604. MIT Press. DOI: 10.1162/089976603321891819
Abstract: Learning Vector Quantization is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here we take a more principled approach and derive two variants of Learning Vector Quantization using a Gaussian mixture ansatz. We propose an objective function which is based on a likelihood ratio and we derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of "soft" Learning Vector Quantization algorithms. Benchmark results show that the new methods lead to better classification performance than LVQ 2.1. An additional benefit of the new method is that model assumptions are made explicit, so that the method can be adapted more easily to different kinds of problems.
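The learning rule described in the abstract, a Gaussian mixture over prototypes with a likelihood-ratio objective optimized by gradient ascent, can be sketched as follows. This is a minimal illustration, not the paper's reference implementation; the shared mixture width `sigma` and the learning rate are illustrative assumptions.

```python
import numpy as np

def soft_lvq_step(W, labels, x, y, sigma=0.5, lr=0.05):
    """One soft-LVQ update on a single sample (x, y).

    W      : (K, D) prototype positions (updated in place and returned)
    labels : (K,) class label of each prototype
    Gradient of log [p(x, y) / p(x)] under a Gaussian mixture ansatz:
    prototypes of the correct class are attracted, all prototypes are
    repelled in proportion to their overall responsibility for x.
    """
    f = -np.sum((W - x) ** 2, axis=1) / (2 * sigma ** 2)
    f -= f.max()                       # shift for numerical stability
    g = np.exp(f)
    P_all = g / g.sum()                # P(j | x): mixture over all prototypes
    correct = labels == y
    P_y = np.zeros_like(P_all)
    P_y[correct] = g[correct] / g[correct].sum()   # P_y(j | x): correct class only
    coeff = (P_y - P_all) / sigma ** 2
    W += lr * coeff[:, None] * (x - W)
    return W

def predict(W, labels, x):
    """Nearest-prototype classification."""
    return labels[np.argmin(np.sum((W - x) ** 2, axis=1))]
```

Because the update weights each prototype by a soft assignment probability rather than updating only the winner and runner-up, all prototypes receive graded updates on every sample, which is what makes the scheme "soft" relative to LVQ 2.1.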
