TU Berlin

Neural Information Processing

Machine Learning and Neural Networks for the Perceptually Relevant Analysis of Music


Music contains structural information as well as semantic connotations that human listeners perceive easily but that are difficult to extract automatically from an acoustic signal (and even from the score of a given piece of music). Here we explore new techniques from machine learning and mathematical music theory, with the goal of creating semantically meaningful representations of acoustic events and of automatically extracting perceptually relevant patterns from music and sound.
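As a purely illustrative sketch of what such a perceptually motivated representation might look like (this is not the project's actual method), the following Python snippet computes a pitch-class (chroma) profile from an audio recording. It assumes the librosa library is available and uses a placeholder file name, example.wav.

# Illustrative sketch only: derive a 12-dimensional pitch-class profile
# from audio, a common example of a perceptually relevant music feature.
# Assumes librosa is installed; "example.wav" is a placeholder file name.
import numpy as np
import librosa

# Load the audio signal as mono at its native sampling rate.
y, sr = librosa.load("example.wav", sr=None, mono=True)

# Constant-Q chroma: per-frame energy folded into the 12 pitch classes,
# roughly mirroring how listeners group octave-equivalent pitches.
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)

# Average over time and normalise to obtain a single profile that can be
# compared across pieces or used as input to a machine-learning model.
profile = chroma.mean(axis=1)
profile = profile / (np.linalg.norm(profile) + 1e-12)

print("Pitch-class profile:", np.round(profile, 3))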

Acknowledgement: Research was funded by the EU and by the Technische Universität Berlin.

Selected Publications:

Purwins, H., Blankertz, B., and Obermayer, K. (2000). Computing Auditory Perception. Organised Sound, 5, 159–171.

Abstract: In this paper the ingredients of computing auditory perception are reviewed. On the basic level there is neurophysiology, which is abstracted to artificial neural nets (ANNs) and enhanced by statistics to machine learning. There are high-level cognitive models derived from psychoacoustics (especially Gestalt principles). The gap between neuroscience and psychoacoustics has to be filled by numerics, statistics, and heuristics. Computerized auditory models have a broad and diverse range of applications: hearing aids and implants, compression in audio codecs, automated music analysis, music composition, interactive music installations, and information retrieval from large databases of music samples.
