Analysis of Neural Data

Book Chapter

Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations
Citation key: Boehmer2015c
Authors: Böhmer, W. and Springenberg, J. T. and Boedecker, J. and Riedmiller, M. and Obermayer, K.
Book title: Künstliche Intelligenz
Pages: 353-362
Year: 2015
ISSN: 0933-1875, 1610-1987
DOI: 10.1007/s13218-015-0356-1
Volume: 29
Number: 4
Publisher: Springer Berlin Heidelberg
Series: Technical Contribution
Abstract: This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor observations. Straightforward end-to-end RL has recently shown remarkable success, but it relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learning intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.
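The abstract names two approaches to learning state representations from sensor observations. As an illustration of the second one, the following is a minimal sketch of linear slow-feature analysis in NumPy, not the authors' implementation: whiten the observation sequence, then keep the whitened directions whose temporal differences have the smallest variance, i.e. whose outputs change most slowly over time. The function name, the toy signal, and all parameters below are hypothetical.

```python
import numpy as np

def slow_feature_analysis(X, n_features):
    """Linear SFA sketch: X has shape [T, D], one observation per time step.
    Returns the n_features slowest-varying projections and the projection matrix."""
    # 1. Center the observations.
    X = X - X.mean(axis=0)

    # 2. Whiten: decorrelate and normalize variance.
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-8                       # drop near-degenerate directions
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ W

    # 3. Temporal differences approximate the time derivative of the signal.
    dZ = np.diff(Z, axis=0)

    # 4. Directions with the smallest derivative variance are the slowest.
    dcov = np.cov(dZ, rowvar=False)
    _, deigvec = np.linalg.eigh(dcov)          # eigenvalues in ascending order
    V = deigvec[:, :n_features]

    # 5. Slow features and the combined observation-to-feature projection.
    return Z @ V, W @ V

# Toy demo: a slowly varying latent signal observed through a noisy random mixing.
T, D = 2000, 10
rng = np.random.default_rng(0)
slow = np.sin(np.linspace(0, 4 * np.pi, T))    # slow latent "state"
obs = np.outer(slow, rng.normal(size=D)) + 0.5 * rng.normal(size=(T, D))
features, projection = slow_feature_analysis(obs, n_features=2)
print(features.shape)                          # (2000, 2)
```

In an RL setting such as the one the article reviews, the recovered slow features would serve as a compact state representation fed to the agent in place of the raw sensor observations; the deep auto-encoder approach mentioned in the abstract replaces the linear projection with a learned nonlinear encoder.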
