A sparse large margin semi-supervised learning method

Hosik Choi, Jinseog Kim, Yongdai Kim

Research output: Contribution to journal › Article › peer-review



In this paper, we propose a sparse semi-supervised learning method that combines the large margin approach with an L1 constraint. The main contribution of the paper is an efficient optimization algorithm: the objective function minimized in a large margin semi-supervised learning method is non-convex and non-differentiable, so special optimization algorithms are required. For this purpose, we develop an optimization algorithm that is a hybrid of the CCCP and the gradient LASSO algorithm. The advantage of the proposed method over existing semi-supervised learning methods is that it identifies a small number of relevant input variables while keeping prediction accuracy high. Moreover, the proposed algorithm is simple enough to be applied to various real problems without being much hampered by computational limitations. To confirm these advantages, we compare the proposed method with the standard semi-supervised method on simulated as well as real data sets.
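The abstract describes splitting a non-convex, non-differentiable objective into a convex part plus a concave part (CCCP) and solving each convex surrogate under an L1 constraint with the gradient LASSO. The sketch below illustrates that general scheme only; it is not the paper's algorithm. It uses a ramp-loss SVM, pseudo-labels unlabeled points with the current classifier as a stand-in for the symmetric loss on unlabeled data, and uses a Frank-Wolfe-style coordinate update on the L1 ball in place of the actual gradient LASSO steps. The function name `cccp_gradient_lasso`, the radius `s`, and the step schedule are all illustrative assumptions.

```python
import numpy as np

def cccp_gradient_lasso(X_l, y_l, X_u, s=2.0, n_cccp=10, n_inner=200):
    """Hypothetical sketch of a CCCP + gradient-LASSO-style solver for an
    L1-constrained ramp-loss SVM with unlabeled data (not the paper's
    exact algorithm).

    Ramp loss: ramp(z) = max(0, 1 - z) - max(0, -z), a hinge loss minus a
    convex term, i.e. a convex-plus-concave decomposition suitable for CCCP.
    """
    n_l, p = X_l.shape
    w = np.zeros(p)
    for _ in range(n_cccp):
        # Pseudo-label unlabeled points with the current classifier
        # (a simple stand-in for a symmetric loss on unlabeled data).
        if X_u is not None and len(X_u):
            y_u = np.sign(X_u @ w)
            y_u[y_u == 0] = 1.0
            X = np.vstack([X_l, X_u])
            y = np.concatenate([y_l, y_u])
        else:
            X, y = X_l, y_l
        n = len(y)
        z = y * (X @ w)
        # CCCP step: linearize the concave part -max(0, -z) at the current
        # iterate; its slope is 1 exactly where z < 0.
        active = z < 0.0
        # Inner loop: Frank-Wolfe-style steps on the L1 ball {||w||_1 <= s},
        # minimizing the convex surrogate
        #   (1/n) * [ sum_i max(0, 1 - z_i) + sum_{i in active} z_i ].
        for t in range(n_inner):
            z = y * (X @ w)
            viol = z < 1.0  # points inside the hinge margin
            grad = (-(y[viol, None] * X[viol]).sum(0)
                    + (y[active, None] * X[active]).sum(0)) / n
            # Move toward the L1-ball vertex that best decreases the
            # surrogate: the coordinate with the largest |gradient|.
            j = np.argmax(np.abs(grad))
            v = np.zeros(p)
            v[j] = -s * np.sign(grad[j])
            gamma = 2.0 / (t + 2.0)  # standard diminishing step size
            w = (1.0 - gamma) * w + gamma * v
    return w
```

Because each update mixes the current iterate with a single-coordinate vertex of the L1 ball, the iterate stays inside the ball and tends to concentrate weight on a few coordinates, which is the sparsity mechanism the abstract refers to.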

Original language: English
Pages (from-to): 479-487
Number of pages: 9
Journal: Journal of the Korean Statistical Society
Issue number: 4
State: Published - Dec 2010


Keywords

  • CCCP
  • Gradient LASSO
  • Semi-supervised learning
  • Support vector machines


