TU Berlin


Neuronale Informationsverarbeitung


Publications

Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding
Citation key Shelton2015
Author Shelton, J. A. and Sheikh, A.-S. and Bornschein, J. and Sterne, P. and Lücke, J.
Pages e0124088
Year 2015
DOI http://dx.doi.org/10.1371/journal.pone.0124088
Journal PLoS ONE
Volume 10
Month 05
Publisher Public Library of Science
Abstract Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, as well as a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. The model assumptions made by the linear and nonlinear approaches have major consequences, thus the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference, which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively and accurately approximate and characterize the meaningful generation process.
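To illustrate the generative assumptions described in the abstract, the following sketch draws latent coefficients from a spike-and-slab prior and contrasts the classical linear superposition with the nonlinear max combination rule. All dimensions, parameter values, and the dictionary are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: H latent components, D pixels
H, D = 5, 16

# Hypothetical non-negative dictionary: H components of D pixels each
W = rng.random((H, D))

# Spike-and-slab prior: a binary "spike" s_h selects which components
# are present, a continuous "slab" z_h gives their intensities.
pi = 0.3                     # prior probability that a component is active
s = rng.random(H) < pi       # spikes: exact zeros when inactive
z = 1.0 + rng.random(H)      # slabs: positive non-zero intensities

coeffs = s * z               # exact zeros for absent components

# Classical linear superposition of dictionary elements
y_linear = coeffs @ W

# Nonlinear max combination: at each pixel only the strongest
# component contributes, mimicking occlusion
y_max = np.max(coeffs[:, None] * W, axis=0)
```

With non-negative components and intensities, the max combination is bounded above by the linear superposition at every pixel, which is one way the two models make visibly different assumptions about overlapping image components.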
