Feature selection in the Laplacian support vector machine

Sangjun Lee, Changyi Park, Ja Yong Koo

Research output: Contribution to journal › Article › peer-review


Abstract

Traditional classifiers, including support vector machines, use only labeled data in training. However, labeled instances are often difficult, costly, or time consuming to obtain, while unlabeled instances are relatively easy to collect. The goal of semi-supervised learning is to improve classification accuracy by using unlabeled data together with a small amount of labeled data when training classifiers. Recently, the Laplacian support vector machine has been proposed as an extension of the support vector machine to semi-supervised learning. Like the support vector machine, the Laplacian support vector machine suffers from limited interpretability. It also performs poorly when there are many non-informative features in the training data, because the final classifier is expressed as a linear combination of informative as well as non-informative features. We introduce a variant of the Laplacian support vector machine that is capable of feature selection based on the functional analysis of variance decomposition. Through synthetic and benchmark data analysis, we illustrate that our method can be a useful tool in semi-supervised learning.
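To illustrate the manifold-regularization idea behind the Laplacian support vector machine described above, the following is a minimal sketch, not the authors' method or code. The squared hinge loss, the RBF kernel bandwidth gamma, the k-nearest-neighbor graph construction, and the regularization weights gamma_A and gamma_I are all illustrative assumptions; the paper's feature-selection variant via the functional ANOVA decomposition is not implemented here.

```python
# Illustrative Laplacian-SVM-style classifier (assumed formulation, not the paper's code):
# minimize (1/l) sum_i max(0, 1 - y_i f(x_i))^2          <- squared hinge on labeled data
#        + gamma_A * ||f||_K^2                            <- ambient (RKHS) penalty
#        + gamma_I / (l+u)^2 * f' L f                     <- manifold smoothness on all data
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of A and B.
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

def knn_laplacian(X, k=5):
    # Unnormalized graph Laplacian L = D - W from a symmetrized k-NN adjacency graph.
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(D2)
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]   # skip each point's own index
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def fit_lapsvm(X_lab, y_lab, X_unl, gamma_A=1e-2, gamma_I=1e-2, gamma=1.0):
    """y_lab in {-1, +1}; returns a decision function f(x) built on labeled and unlabeled data."""
    X = np.vstack([X_lab, X_unl])              # all l + u training points
    n, l = X.shape[0], X_lab.shape[0]
    K = rbf_kernel(X, X, gamma)                # (l+u) x (l+u) Gram matrix
    L = knn_laplacian(X)

    def objective(alpha):
        f = K @ alpha                          # f(x_i) at every training point
        hinge = np.mean(np.maximum(1.0 - y_lab * f[:l], 0.0) ** 2)
        ambient = gamma_A * alpha @ K @ alpha  # ||f||_K^2 via the representer expansion
        intrinsic = gamma_I / n ** 2 * f @ L @ f
        return hinge + ambient + intrinsic

    alpha = minimize(objective, np.zeros(n), method="L-BFGS-B").x
    return lambda X_new: rbf_kernel(X_new, X, gamma) @ alpha
```

The unlabeled points enter only through the graph Laplacian term, which rewards decision functions that vary smoothly along the data manifold; with gamma_I set to zero the sketch reduces to an ordinary kernel SVM trained on the labeled points alone.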

Original language: English
Pages (from-to): 567-577
Number of pages: 11
Journal: Computational Statistics and Data Analysis
Volume: 55
Issue number: 1
State: Published - 1 Jan 2011

Keywords

  • Classification
  • Component selection and smoothing operator
  • Functional ANOVA decomposition
  • Manifold regularization
  • Semi-supervised learning
