TU Berlin

Potential - Support Vector Machine (P-SVM)

We offer a first implementation of the new P-SVM algorithm: software for support vector classification, regression, and feature extraction.

PSVM Documentation - Introduction to P-SVM and software features
PSVM Manual - Detailed description of all included software components and sample files
PSVM SMO Technical Report - Description of the algorithm used within the software
Software License - This software is published under the GNU General Public License

Download, compile, and try out the software, version 1.31 (beta) (2008-05-25), with sample and documentation files:
psvm.tar.gz - compressed files as tar-file
psvm.zip - same files in zip format

Version 1.3 from 2007-11-19: psvm13.tar.gz / psvm13.zip

- To take advantage of all functions of the software, gnuplot is needed.
- One additional dataset, ARCENE.zip, is referenced in the sample section of the software.
- The SVM implementation libsvm is also referenced in the software, but is not required.

Please report bugs and errors to Tilman Knebel. If you use this software in your application, please cite:

Support Vector Machines for Dyadic Data
Citation key: Hochreiter2006b
Authors: Hochreiter, S. and Obermayer, K.
Journal: Neural Comput., Volume 18, pp. 1472–1510, 2006
DOI: 10.1162/neco.2006.18.6.1472
Abstract: We describe a new technique for the analysis of dyadic data, where two sets of objects ("row" and "column" objects) are characterized by a matrix of numerical values which describe their mutual relationships. The new technique, called the "Potential Support Vector Machine" (P-SVM), is a large-margin method for the construction of classifiers and regression functions for the "column" objects. Contrary to standard support vector machine approaches, the P-SVM minimizes a scale-invariant capacity measure and requires a new set of constraints. As a result, the P-SVM method leads to a usually sparse expansion of the classification and regression functions in terms of the "row" rather than the "column" objects and can handle data and kernel matrices which are neither positive definite nor square. We then describe two complementary regularization schemes. The first scheme improves generalization performance for classification and regression tasks; the second scheme leads to the selection of a small, informative set of "row" "support" objects and can be applied to feature selection. Benchmarks for classification, regression, and feature selection tasks are performed with toy data as well as with several real-world data sets. The results show that the new method is at least competitive with, but often performs better than, the benchmarked standard methods for standard vectorial as well as for true dyadic data sets. In addition, a theoretical justification is provided for the new approach.
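The dyadic setting described in the abstract can be pictured with a small sketch: a rectangular matrix K relates "row" objects to "column" objects, and a classifier for the column objects is expanded over the row objects as f(i) = Σ_j α_j K[j][i] + b. The toy code below is not the P-SVM optimizer (the software uses an SMO-type solver, described in the technical report above); it fits the α coefficients with a simple perceptron rule on made-up data, purely to illustrate the shape of the expansion.

```python
# Toy illustration only: expansion of a column-object classifier in terms
# of row objects, f(i) = sum_j alpha[j] * K[j][i] + b. The alphas are fit
# with a plain perceptron rule, NOT the P-SVM optimization. All data are
# invented for this sketch.

def train_row_expansion(K, y, epochs=20):
    """Fit one weight per row object plus a bias with perceptron updates."""
    n_rows, n_cols = len(K), len(K[0])
    alpha = [0.0] * n_rows
    b = 0.0
    for _ in range(epochs):
        updated = False
        for i in range(n_cols):
            f = sum(alpha[j] * K[j][i] for j in range(n_rows)) + b
            if y[i] * f <= 0:          # column object i misclassified
                for j in range(n_rows):
                    alpha[j] += y[i] * K[j][i]
                b += y[i]
                updated = True
        if not updated:                # all column objects classified correctly
            break
    return alpha, b

def predict(K, alpha, b):
    """Apply the row-object expansion to every column object."""
    return [1 if sum(alpha[j] * K[j][i] for j in range(len(K))) + b > 0 else -1
            for i in range(len(K[0]))]

# 2 row objects x 4 column objects; the labels belong to the column objects.
# Note the matrix is rectangular, not square, as in the dyadic-data setting.
K = [[1.0, 2.0, -1.0, -2.0],
     [1.0, 1.0, -1.0, -1.0]]
y = [1, 1, -1, -1]

alpha, b = train_row_expansion(K, y)
print(predict(K, alpha, b))  # → [1, 1, -1, -1]
```

In the actual P-SVM, the analogous expansion is made sparse by the regularization schemes mentioned in the abstract, so that only a small set of "support" row objects receives nonzero weight, which is what enables feature selection.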
