Search Results for author: Junhong Lin

Found 11 papers, 1 paper with code

Kernel Conjugate Gradient Methods with Random Projections

no code implementations • 5 Nov 2018 • Junhong Lin, Volkan Cevher

We propose and study kernel conjugate gradient methods (KCGM) with random projections for least-squares regression over a separable Hilbert space.
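
The core idea can be illustrated in a few lines: sketch the kernel system with a random projection and run plain conjugate gradient on the projected problem, using the iteration count as the regularization parameter. Below is a minimal numpy sketch under illustrative assumptions (Gaussian sketch, RBF kernel, early stopping); the function names and parameters are mine, not the paper's.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kcgm_random_projection(X, y, m=50, iters=20, sigma=1.0, seed=0):
    """Conjugate gradient on a randomly projected kernel system:
    solve (S K S^T) beta = S y, with the iteration count acting as
    the regularization parameter (early stopping)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch
    A, b = S @ K @ S.T, S @ y                     # m x m projected system
    beta = np.zeros(m)
    r = b - A @ beta
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        if rs < 1e-20:                            # already converged
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        beta += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return S.T @ beta                             # coefficients in the data basis

# Predictions on new points: rbf_kernel(X_test, X_train, sigma) @ alpha
```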

Optimal Distributed Learning with Multi-pass Stochastic Gradient Methods

no code implementations • ICML 2018 • Junhong Lin, Volkan Cevher

We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS).
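
One standard instantiation of this setting is divide-and-conquer: partition the data across machines, run multi-pass kernel SGD locally, and average the local predictors. Here is a minimal sketch; the RBF kernel, cyclic passes, and decaying step size are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def local_kernel_sgd(X, y, passes=3, step=0.5, sigma=1.0):
    # Multi-pass SGD for kernel least squares on one machine:
    # the estimator is f = sum_i a_i k(x_i, .)
    n = X.shape[0]
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    a = np.zeros(n)
    for t in range(passes * n):
        i = t % n                        # cyclic passes over the local data
        eta = step / np.sqrt(t + 1)      # decaying step size
        a[i] -= eta * (K[i] @ a - y[i])  # pointwise least-squares update
    return a

def distributed_fit(X, y, n_machines=4, **kw):
    # Partition the data, run multi-pass SGD locally on each block
    blocks = np.array_split(np.arange(len(y)), n_machines)
    return [(X[b], local_kernel_sgd(X[b], y[b], **kw)) for b in blocks]

def distributed_predict(models, X_test, sigma=1.0):
    # Average the local predictors (divide-and-conquer estimator)
    preds = []
    for X_tr, a in models:
        K = np.exp(-((X_test[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
        preds.append(K @ a)
    return np.mean(preds, axis=0)
```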

Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces

no code implementations • ICML 2018 • Junhong Lin, Volkan Cevher

We investigate regularized algorithms combined with projection for least-squares regression problems over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space.
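
A concrete member of this family is sketch-and-solve kernel ridge regression, which restricts the coefficient vector to the range of a random sketch before solving. The projected normal equations below are one common variant of this construction, not necessarily the paper's exact formulation.

```python
import numpy as np

def sketched_krr(K, y, m=50, lam=1e-3, seed=0):
    """Sketch-and-solve kernel ridge regression: restrict the
    coefficients to alpha = S^T beta for an m x n sketch S and solve
    the projected normal equations
        (S K^2 S^T + lam * n * S K S^T) beta = S K y."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    KS = K @ S.T                            # n x m
    A = KS.T @ KS + lam * n * (S @ KS)      # S K^2 S^T + lam n S K S^T
    beta = np.linalg.solve(A, KS.T @ y)
    return S.T @ beta                       # length-n coefficient vector
```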

Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms

no code implementations • 22 Jan 2018 • Junhong Lin, Volkan Cevher

We then extend our results to spectral-regularization algorithms (SRA), including kernel ridge regression (KRR), kernel principal component analysis, and gradient methods.
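
Spectral-regularization algorithms share one template: apply a filter function g_lambda to the spectrum of the normalized kernel matrix, with different filters recovering ridge regression, principal component (spectral cut-off) regression, or gradient descent. A minimal numpy sketch of that template; the filter definitions are textbook choices, not taken from the paper.

```python
import numpy as np

def spectral_estimator(K, y, lam=1e-3, method="ridge"):
    """Apply a spectral filter g_lam to the normalized kernel matrix:
    alpha = V g_lam(w) V^T y / n, where K/n = V diag(w) V^T."""
    n = K.shape[0]
    w, V = np.linalg.eigh(K / n)
    w = np.clip(w, 0.0, None)               # guard tiny negative eigenvalues
    if method == "ridge":                   # g(s) = 1 / (s + lam)
        g = 1.0 / (w + lam)
    elif method == "cutoff":                # keep components above lam
        g = np.where(w >= lam, 1.0 / np.maximum(w, lam), 0.0)
    elif method == "gradient":              # L ~ 1/lam gradient-descent steps
        L = max(int(1.0 / lam), 1)
        eta = 1.0 / max(w.max(), 1e-12)     # step size below 1/||K/n||
        s = np.maximum(w, 1e-12)
        g = (1.0 - (1.0 - eta * s) ** L) / s
    else:
        raise ValueError(method)
    return V @ (g * (V.T @ y)) / n          # kernel expansion coefficients
```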

Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces

no code implementations • 20 Jan 2018 • Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher

In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.

Optimal Rates for Learning with Nyström Stochastic Gradient Methods

no code implementations • 21 Oct 2017 • Junhong Lin, Lorenzo Rosasco

In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.
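
The combination can be sketched as: subsample m landmark points, map the data to the resulting m-dimensional Nyström feature space, then run multi-pass mini-batch SGD there. A minimal sketch under illustrative assumptions (RBF kernel, uniform subsampling, decaying step size); names and defaults are mine.

```python
import numpy as np

def rbf(X, Y, sigma=1.0):
    return np.exp(-((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))

def nystrom_sgd(X, y, m=100, passes=5, batch=16, step=1.0, sigma=1.0, seed=0):
    """Mini-batch SGD on Nystrom features: subsample m landmarks,
    map the data to the m-dimensional Nystrom feature space, then run
    multiple passes of mini-batch least-squares SGD."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=min(m, n), replace=False)  # landmark points
    Xm = X[idx]
    w_, V = np.linalg.eigh(rbf(Xm, Xm, sigma) + 1e-10 * np.eye(len(idx)))
    R = V @ np.diag(1.0 / np.sqrt(np.clip(w_, 1e-10, None))) @ V.T
    Phi = rbf(X, Xm, sigma) @ R               # n x m Nystrom feature map
    wvec, t = np.zeros(Phi.shape[1]), 0
    for _ in range(passes):
        for start in range(0, n, batch):
            B = slice(start, min(start + batch, n))
            grad = Phi[B].T @ (Phi[B] @ wvec - y[B]) / (B.stop - B.start)
            wvec -= (step / np.sqrt(t + 1)) * grad
            t += 1
    return wvec, Xm, R

# Predictions on new points: rbf(X_test, Xm, sigma) @ R @ wvec
```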

Generalization Properties of Doubly Stochastic Learning Algorithms

no code implementations • 3 Jul 2017 • Junhong Lin, Lorenzo Rosasco

In this paper, we provide an in-depth theoretical analysis of different variants of doubly stochastic learning algorithms in the setting of nonparametric regression in a reproducing kernel Hilbert space with the square loss.
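
Doubly stochastic algorithms are stochastic in two directions at once: they sample random data points and random features. The sketch below conveys the idea using a fixed pool of random Fourier features with sampled feature coordinates; the algorithms analyzed in the paper draw fresh random features at every iteration, so treat this as illustrative only.

```python
import numpy as np

def doubly_stochastic_fit(X, y, n_features=200, steps=2000, step=0.5,
                          sigma=1.0, feat_batch=20, seed=0):
    """At each step, sample one random data point AND a random subset
    of random Fourier feature coordinates, and update only those
    coordinates (rescaled to keep the gradient estimate unbiased)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((n_features, d)) / sigma   # RFF frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)      # RFF phases
    phi = lambda x: np.sqrt(2.0 / n_features) * np.cos(W @ x + b)
    a = np.zeros(n_features)
    for t in range(steps):
        i = rng.integers(n)                                    # random point
        J = rng.choice(n_features, feat_batch, replace=False)  # random features
        p = phi(X[i])
        resid = p @ a - y[i]                                   # pointwise residual
        a[J] -= (step / np.sqrt(t + 1)) * resid * p[J] * (n_features / feat_batch)
    return a, phi

# Predictions on a new point x: phi(x) @ a
```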

Optimal Learning for Multi-pass Stochastic Gradient Methods

no code implementations • NeurIPS 2016 • Junhong Lin, Lorenzo Rosasco

We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.
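
A take-away of this line of work is that step size, mini-batch size, and number of passes jointly play the role of the regularization parameter: more passes at a fixed step size move the iterate toward the unregularized solution. A minimal kernel least-squares sketch (reshuffled passes and the defaults below are illustrative):

```python
import numpy as np

def multipass_minibatch_sgd(K, y, passes=4, batch=8, step=0.2, seed=0):
    """Kernel least squares via multi-pass mini-batch SGD. Step size,
    mini-batch size, and number of passes jointly act as the
    regularization parameter."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    a = np.zeros(n)
    for _ in range(passes):
        order = rng.permutation(n)              # reshuffle every pass
        for start in range(0, n, batch):
            B = order[start:start + batch]
            resid = K[B] @ a - y[B]             # residuals on the mini-batch
            a[B] -= (step / len(B)) * resid     # update batch coefficients
    return a
```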

Optimal Rates for Multi-pass Stochastic Gradient Methods

no code implementations • 28 May 2016 • Junhong Lin, Lorenzo Rosasco

As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).

Generalization Properties and Implicit Regularization for Multiple Passes SGM

1 code implementation • 26 May 2016 • Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.

Iterative Regularization for Learning with Convex Loss Functions

no code implementations • 31 Mar 2015 • Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.
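
The mechanism is easy to demonstrate: run the subgradient method on the unregularized empirical risk and regularize by stopping early, choosing the stopping time on held-out data. A minimal sketch for the hinge loss with labels in {-1, +1} (the loss and step-size schedule here are illustrative choices):

```python
import numpy as np

def subgradient_early_stopping(X, y, steps=200, step=0.1):
    """Subgradient method on the unregularized empirical hinge-loss risk;
    the number of iterations (early stopping) acts as the regularizer."""
    n, d = X.shape
    w = np.zeros(d)
    path = []
    for t in range(steps):
        margins = y * (X @ w)
        # Subgradient of the average hinge loss max(0, 1 - y <w, x>)
        g = -(X * y[:, None])[margins < 1].sum(0) / n
        w -= (step / np.sqrt(t + 1)) * g
        path.append(w.copy())        # keep the whole iterate path
    return path                      # pick the stopping time on held-out data
```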
