Fast Iterative Kernel Principal Component Analysis

S. Günter, N. N. Schraudolph, and S. Vishwanathan. Fast Iterative Kernel Principal Component Analysis. Journal of Machine Learning Research, 8:1893–1918, 2007.

Download

pdf (2.0 MB)   djvu (424.5 kB)   ps.gz (2.7 MB)

Abstract

We develop gain adaptation methods that improve convergence of the kernel Hebbian algorithm (KHA) for iterative kernel PCA (Kim et al., 2005). KHA has a scalar gain parameter, which is either held constant or decreased according to a predetermined annealing schedule, leading to slow convergence. We accelerate it by incorporating the reciprocal of the current estimated eigenvalues as part of a gain vector. An additional normalization term then allows us to eliminate a tuning parameter in the annealing schedule. Finally, we derive and apply stochastic meta-descent (SMD) gain vector adaptation (Schraudolph, 1999, 2002) in reproducing kernel Hilbert space to further speed up convergence. Experimental results on kernel PCA and spectral clustering of USPS digits, motion capture, image denoising, and image super-resolution tasks confirm that our methods converge substantially faster than conventional KHA. To demonstrate scalability, we perform kernel PCA on the entire MNIST data set.
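
Code Sketch

The abstract's central idea, replacing KHA's scalar gain with a per-component gain vector built from reciprocal eigenvalue estimates, can be sketched in a few lines of NumPy. The following is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a precomputed, centered kernel matrix, uses an exponential moving average of squared projections as a crude eigenvalue estimate, and omits the paper's annealing/normalization scheme and the SMD update in RKHS. The function name and all constants (eta0, ema, floor) are illustrative choices, not values from the paper.

import numpy as np

def kha_reciprocal_gain(K, r, epochs=20, eta0=0.05, ema=0.01,
                        floor=1e-2, seed=0):
    # K: (n, n) centered kernel matrix; r: number of components.
    # Returns A (r, n): row i holds the expansion coefficients of the
    # i-th kernel principal direction, sum_j A[i, j] phi(x_j).
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=1.0 / n, size=(r, n))
    lam = np.ones(r)                    # running eigenvalue estimates
    for _ in range(epochs):
        for t in rng.permutation(n):
            k = K[:, t]                 # kernel values of sample t vs. all data
            y = A @ k                   # projections onto current estimates
            # KHA direction: Hebbian term minus lower-triangular
            # decorrelation across components (Gram-Schmidt-like deflation)
            delta = -np.tril(np.outer(y, y)) @ A
            delta[:, t] += y
            # EMA of y_i^2 as an estimate of the i-th eigenvalue
            lam = (1.0 - ema) * lam + ema * y**2
            # gain vector: reciprocal eigenvalue estimates, floored for stability
            gain = eta0 / np.maximum(lam, floor)
            A += gain[:, None] * delta
    return A

# Usage sketch: RBF kernel on toy data, double-centered before KHA.
X = np.random.default_rng(1).normal(size=(200, 10))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2.0 * np.median(sq)))
J = np.eye(200) - np.ones((200, 200)) / 200
A = kha_reciprocal_gain(J @ K @ J, r=4)

Components with small estimated eigenvalues receive proportionally larger gains, which is what accelerates convergence along slow directions; the floor on the estimates is a stabilization choice of this sketch, standing in for the paper's normalization and annealing.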

BibTeX Entry

@article{GueSchVis07,
     author = {Simon G\"unter and Nicol N. Schraudolph and
               S.~V.~N. Vishwanathan},
      title = {\href{http://nic.schraudolph.org/pubs/GueSchVis07.pdf}{
               Fast Iterative Kernel Principal Component Analysis}},
      pages = {1893--1918},
    journal =  jmlr,
     volume =  8,
       year =  2007,
   b2h_type = {Journal Papers},
  b2h_topic = {>Stochastic Meta-Descent, Kernel Methods, Unsupervised Learning},
   abstract = {
    We develop gain adaptation methods that improve convergence of
    the kernel Hebbian algorithm (KHA) for iterative kernel PCA
    (Kim et al., 2005). KHA has a scalar gain parameter, which is
    either held constant or decreased according to a predetermined
    annealing schedule, leading to slow convergence. We accelerate
    it by incorporating the reciprocal of the current estimated
    eigenvalues as part of a gain vector. An additional normalization
    term then allows us to eliminate a tuning parameter in the
    annealing schedule. Finally, we derive and apply stochastic
    meta-descent (SMD) gain vector adaptation (Schraudolph, 1999,
    2002) in reproducing kernel Hilbert space to further speed up
    convergence. Experimental results on kernel PCA and spectral
    clustering of USPS digits, motion capture, image denoising, and
    image super-resolution tasks confirm that our methods converge
    substantially faster than conventional KHA. To demonstrate
    scalability, we perform kernel PCA on the entire MNIST data
    set.
}}
