Search Results for author: Yunwen Lei

Found 39 papers, 4 papers with code

Optimizing ADMM and Over-Relaxed ADMM Parameters for Linear Quadratic Problems

no code implementations • 1 Jan 2024 • Jintao Song, Wenqi Lu, Yunwen Lei, Yuchao Tang, Zhenkuan Pan, Jinming Duan

The Alternating Direction Method of Multipliers (ADMM) has gained significant attention across a broad spectrum of machine learning applications.

Deblurring Image Deblurring +2
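
To make the iteration concrete, here is a minimal sketch of over-relaxed ADMM on a toy linear quadratic (ridge-type) problem; the splitting, penalty rho, and relaxation parameter alpha below are illustrative defaults, not the tuned values the paper studies.

```python
# A minimal sketch of over-relaxed ADMM for a quadratic problem
# (illustrative splitting and parameters, not the paper's setup).
import numpy as np

def over_relaxed_admm(A, b, lam=1.0, rho=1.0, alpha=1.5, iters=200):
    """min_x 0.5*||Ax - b||^2 + 0.5*lam*||x||^2 via the split
    f(x) = 0.5*||Ax - b||^2, g(z) = 0.5*lam*||z||^2, s.t. x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)             # x-update system matrix
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))
        x_hat = alpha * x + (1 - alpha) * z   # over-relaxation, alpha in (0, 2)
        z = rho * (x_hat + u) / (lam + rho)   # closed-form prox of g
        u = u + x_hat - z                     # scaled dual update
    return z

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
x_ref = np.linalg.solve(A.T @ A + np.eye(10), A.T @ b)  # exact ridge solution
print(np.linalg.norm(over_relaxed_admm(A, b) - x_ref))  # ~0 after convergence
```

The penalty rho and the relaxation parameter alpha in (0, 2) are exactly the kind of parameters whose choice this paper studies.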

Stability and Generalization for Minibatch SGD and Local SGD

no code implementations • 2 Oct 2023 • Yunwen Lei, Tao Sun, Mingrui Liu

We show both minibatch and local SGD achieve a linear speedup to attain the optimal risk bounds.
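
As a rough picture of the two algorithms being compared, the sketch below contrasts minibatch SGD (gradients averaged across workers at every step) with local SGD (each worker runs several local steps between averaging rounds) on synthetic least squares; the worker count, step size, and data are illustrative assumptions, not the paper's setting.

```python
# Minibatch SGD vs local SGD on synthetic least squares (illustrative).
import numpy as np

rng = np.random.default_rng(0)
K, n, d, eta = 4, 256, 5, 0.1              # workers, samples per worker, dim, step
w_star = rng.normal(size=d)
X = rng.normal(size=(K, n, d))
y = X @ w_star + 0.1 * rng.normal(size=(K, n))

def grad(w, k, i):                          # stochastic gradient on worker k, sample i
    return (X[k, i] @ w - y[k, i]) * X[k, i]

# Minibatch SGD: every step averages one stochastic gradient per worker.
w_mb = np.zeros(d)
for t in range(500):
    w_mb -= eta * np.mean([grad(w_mb, k, rng.integers(n)) for k in range(K)], axis=0)

# Local SGD: each worker takes H local steps between averaging rounds.
H, w_loc = 10, np.zeros((K, d))
for rnd in range(50):                       # 50 rounds x 10 local steps = 500 steps
    for k in range(K):
        for _ in range(H):
            w_loc[k] -= eta * grad(w_loc[k], k, rng.integers(n))
    w_loc[:] = w_loc.mean(axis=0)           # communication: model averaging
print(np.linalg.norm(w_mb - w_star), np.linalg.norm(w_loc[0] - w_star))
```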

Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks

no code implementations • 26 May 2023 • Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou

This sheds light on sufficient or necessary conditions for under-parameterized and over-parameterized NNs trained by GD to attain the desired risk rate of $O(1/\sqrt{n})$.

Generalization Analysis for Contrastive Representation Learning

no code implementations • 24 Feb 2023 • Yunwen Lei, Tianbao Yang, Yiming Ying, Ding-Xuan Zhou

For self-bounding Lipschitz loss functions, we further improve our results by developing optimistic bounds which imply fast rates in a low noise condition.

Contrastive Learning Generalization Bounds +1
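
For readers unfamiliar with the setting, a common form of the contrastive loss studied in this line of theory couples an anchor with one positive and k negatives through a logistic loss; the sketch below is a generic instance with made-up features, not the paper's construction.

```python
# A generic logistic contrastive loss over one positive and k negatives
# (features are random placeholders for illustration).
import numpy as np

def contrastive_loss(f_anchor, f_pos, f_negs):
    """log(1 + sum_i exp(f(x)^T f(x_i^-) - f(x)^T f(x^+)))."""
    scores = f_negs @ f_anchor - f_pos @ f_anchor
    return np.log1p(np.exp(scores).sum())

rng = np.random.default_rng(0)
anchor, pos = rng.normal(size=8), rng.normal(size=8)
negs = rng.normal(size=(5, 8))             # k = 5 negative examples
print(contrastive_loss(anchor, pos, negs))
```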

On Stability and Generalization of Bilevel Optimization Problem

no code implementations • 3 Oct 2022 • Meng Ding, Mingxi Lei, Yunwen Lei, Di Wang, Jinhui Xu

In this paper, we conduct a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization problem.

Bilevel Optimization Meta-Learning
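
As a minimal illustration of what a first-order bilevel scheme looks like, the sketch below alternates inner gradient steps on a toy lower-level quadratic with outer steps that use only the direct gradient, dropping the implicit hypergradient term as common first-order approximations do; all objectives and step sizes are assumptions, not the paper's algorithm.

```python
# A toy alternating first-order bilevel scheme (illustrative only).
# Inner:  G(x, y) = 0.5*(y - x)^2      => argmin_y G(x, y) = x
# Outer:  F(x, y) = 0.5*(y - 1)^2 + 0.5*(x - 2)^2
x, y = 5.0, 0.0
for t in range(100):
    for _ in range(10):
        y -= 0.5 * (y - x)     # inner gradient step on dG/dy = y - x
    x -= 0.1 * (x - 2)         # outer step on dF/dx only; dropping the
                               # implicit term dF/dy * dy*/dx biases the
                               # answer: this scheme drives x -> 2, while
                               # the true bilevel optimum is x = 1.5
print(x, y)
```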

Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks

no code implementations • 19 Sep 2022 • Yunwen Lei, Rong Jin, Yiming Ying

While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks remains largely elusive.

Stability and Generalization for Markov Chain Stochastic Gradient Methods

no code implementations • 16 Sep 2022 • Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou

To the best of our knowledge, this is the first generalization analysis of SGMs when the gradients are sampled from a Markov process.

Generalization Bounds Learning Theory
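
To illustrate what "gradients sampled from a Markov process" means in practice, the sketch below runs least-squares SGD where the sample index follows a lazy random walk on a ring rather than i.i.d. uniform sampling; the chain, model, and step sizes are illustrative assumptions.

```python
# A minimal Markov chain stochastic gradient method (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

w, i = np.zeros(d), 0
for t in range(1, 5001):
    i = (i + rng.choice([-1, 0, 1])) % n     # lazy random walk on a ring
    g = (X[i] @ w - y[i]) * X[i]             # least-squares stochastic gradient
    w -= (0.5 / np.sqrt(t)) * g
print(np.linalg.norm(w - w_star))            # small: the chain is ergodic
```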

Differentially Private Stochastic Gradient Descent with Low-Noise

no code implementations • 9 Sep 2022 • Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou

In this paper, we focus on the privacy and utility (measured by excess risk bounds) performances of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization.

Privacy Preserving
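
A standard gradient-perturbation template for differentially private SGD clips each per-example gradient and adds Gaussian noise, as sketched below; the clip norm and noise scale are placeholders, and calibrating sigma to a target (epsilon, delta) budget is omitted.

```python
# DP-SGD with gradient perturbation on logistic loss (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, d, clip, sigma, eta = 200, 5, 1.0, 1.0, 0.1
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))          # +/-1 labels

w = np.zeros(d)
for t in range(500):
    i = rng.integers(n)
    g = -y[i] * X[i] / (1 + np.exp(y[i] * (X[i] @ w)))  # logistic loss gradient
    g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip to norm <= clip
    g += sigma * clip * rng.normal(size=d)               # Gaussian perturbation
    w -= eta * g
print(w)
```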

Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems

no code implementations • 14 Jun 2022 • Yunwen Lei

In this paper, we initiate a systematic stability and generalization analysis of stochastic optimization on nonconvex and nonsmooth problems.

Stochastic Optimization

Differentially Private SGDA for Minimax Problems

no code implementations • 22 Jan 2022 • Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying

We further provide its utility analysis in the nonconvex-strongly-concave setting, which is the first known result in terms of the primal population risk.

Fine-grained Generalization Analysis of Inductive Matrix Completion

no code implementations • NeurIPS 2021 • Antoine Ledent, Rodrigo Alves, Yunwen Lei, Marius Kloft

In this paper, we bridge the gap between the state-of-the-art theoretical results for matrix completion with the nuclear norm and their equivalent in inductive matrix completion: (1) In the distribution-free setting, we prove bounds improving the previously best scaling of $O(rd^2)$ to $\widetilde{O}(d^{3/2}\sqrt{r})$, where $d$ is the dimension of the side information and $r$ is the rank.

Matrix Completion

Generalization Guarantee of SGD for Pairwise Learning

no code implementations • NeurIPS 2021 • Yunwen Lei, Mingrui Liu, Yiming Ying

We develop a novel high-probability generalization bound for uniformly stable algorithms that incorporates variance information for better generalization; based on this bound, we establish the first nonsmooth learning algorithm to achieve almost optimal high-probability and dimension-independent generalization bounds in linear time.

Generalization Bounds Metric Learning

Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

no code implementations • NeurIPS 2021 • Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, in which the current instance must be paired with a sufficiently large buffer of previous instances, leading to a scalability issue; see the sketch below.

Generalization Bounds Metric Learning +1
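
The buffering mechanism described above fits in a few lines: each incoming example is paired with every example in a size-s buffer, so the per-step cost grows with s. The pairwise squared loss and all parameters below are illustrative assumptions.

```python
# Buffered online gradient descent for pairwise learning (illustrative).
import numpy as np

rng = np.random.default_rng(0)
d, s, eta = 5, 10, 0.05
w_star = rng.normal(size=d)
w, buf = np.zeros(d), []

for t in range(2000):
    x = rng.normal(size=d)
    yv = x @ w_star
    for xb, yb in buf:                       # pair the new point with the buffer
        dx, dy = x - xb, yv - yb
        w -= eta * (dx @ w - dy) * dx / len(buf)
    buf.append((x, yv))
    if len(buf) > s:
        buf.pop(0)                           # evict the oldest example
print(np.linalg.norm(w - w_star))
```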

Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

1 code implementation • 23 Nov 2021 • Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, in which the current instance must be paired with a sufficiently large buffer of previous instances, leading to a scalability issue.

Generalization Bounds Metric Learning +1

Learning Interpretable Concept Groups in CNNs

1 code implementation • 21 Sep 2021 • Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft

We propose a novel training methodology -- Concept Group Learning (CGL) -- that encourages training of interpretable CNN filters by partitioning filters in each layer into concept groups, each of which is trained to learn a single visual concept.

Stability and Generalization for Randomized Coordinate Descent

no code implementations • 17 Aug 2021 • Puyu Wang, Liang Wu, Yunwen Lei

Randomized coordinate descent (RCD) is a popular optimization algorithm with wide applications in solving various machine learning problems, which has motivated substantial theoretical analysis of its convergence behavior.

Generalization Bounds
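
For reference, a minimal randomized coordinate descent loop on a positive definite quadratic looks as follows; the exact coordinate minimization step is the textbook version, and the problem instance is synthetic.

```python
# Randomized coordinate descent on 0.5*x'Px - q'x (illustrative).
import numpy as np

rng = np.random.default_rng(0)
d = 10
B = rng.normal(size=(d, d))
P = B.T @ B + np.eye(d)                  # positive definite
q = rng.normal(size=d)

x = np.zeros(d)
for t in range(2000):
    j = rng.integers(d)                  # uniformly random coordinate
    x[j] -= (P[j] @ x - q[j]) / P[j, j]  # zero the j-th partial derivative
print(np.linalg.norm(x - np.linalg.solve(P, q)))
```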

Fine-grained Generalization Analysis of Structured Output Prediction

no code implementations • 31 May 2021 • Waleed Mustafa, Yunwen Lei, Antoine Ledent, Marius Kloft

Existing generalization analysis implies generalization bounds with at least a square-root dependency on the cardinality $d$ of the label set, which can be vacuous in practice.

Generalization Bounds speech-recognition +1

Stability and Generalization of Stochastic Gradient Methods for Minimax Problems

1 code implementation • 8 May 2021 • Yunwen Lei, Zhenhuan Yang, Tianbao Yang, Yiming Ying

In this paper, we provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems under both convex-concave and nonconvex-nonconcave cases through the lens of algorithmic stability.

Generalization Bounds
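
The basic stochastic gradient descent ascent (SGDA) update analyzed in this line of work descends in the min variable and ascends in the max variable with a shared decreasing step size; the strongly-convex-strongly-concave toy saddle problem below is an illustrative assumption.

```python
# SGDA on min_w max_v E[0.5*w^2 + xi*w*v - 0.5*v^2] (illustrative).
import numpy as np

rng = np.random.default_rng(0)
w, v = 2.0, -2.0
for t in range(1, 2001):
    xi = 1.0 + 0.5 * rng.normal()        # stochastic coefficient, mean 1
    eta = 1.0 / np.sqrt(t)
    gw = w + xi * v                      # d/dw of the sampled objective
    gv = xi * w - v                      # d/dv of the sampled objective
    w, v = w - eta * gw, v + eta * gv    # descent step, ascent step
print(w, v)                              # both approach the saddle point (0, 0)
```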

Fine-grained Generalization Analysis of Vector-valued Learning

no code implementations • 29 Apr 2021 • Liang Wu, Antoine Ledent, Yunwen Lei, Marius Kloft

In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate in the sample size.

Extreme Multi-Label Classification General Classification +2

Differentially Private SGD with Non-Smooth Losses

no code implementations • 22 Jan 2021 • Puyu Wang, Yunwen Lei, Yiming Ying, Hai Zhang

We significantly relax these restrictive assumptions and establish privacy and generalization (utility) guarantees for private SGD algorithms using output and gradient perturbations associated with non-smooth convex losses.

Sharper Generalization Bounds for Pairwise Learning

no code implementations • NeurIPS 2020 • Yunwen Lei, Antoine Ledent, Marius Kloft

Pairwise learning refers to learning tasks with loss functions depending on a pair of training examples, which includes ranking and metric learning as specific examples.

Generalization Bounds Metric Learning

Stochastic Hard Thresholding Algorithms for AUC Maximization

1 code implementation • 4 Nov 2020 • Zhenhuan Yang, Baojian Zhou, Yunwen Lei, Yiming Ying

In this paper, we aim to develop stochastic hard thresholding algorithms for the important problem of AUC maximization in imbalanced classification.

imbalanced classification
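
The core primitive here is the hard thresholding operator, which projects onto k-sparse vectors by keeping the k largest-magnitude coordinates. The sketch below applies it after a stochastic gradient step on a pairwise squared surrogate of the AUC risk over a random positive/negative pair; the surrogate, sparsity level, and data are illustrative assumptions.

```python
# Stochastic hard thresholding for sparse AUC maximization (illustrative).
import numpy as np

def hard_threshold(w, k):
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]     # indices of the k largest |w_j|
    out[idx] = w[idx]
    return out

rng = np.random.default_rng(0)
d, k, eta = 50, 5, 0.01
w_star = np.zeros(d)
w_star[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
Xp = rng.normal(size=(500, d)) + w_star  # positive class, shifted along w_star
Xn = rng.normal(size=(500, d)) - w_star  # negative class, shifted oppositely

w = np.zeros(d)
for t in range(2000):
    xp, xn = Xp[rng.integers(500)], Xn[rng.integers(500)]
    g = -2 * (1 - w @ (xp - xn)) * (xp - xn)   # grad of (1 - w'(xp - xn))^2
    w = hard_threshold(w - eta * g, k)         # project onto k-sparse vectors
print(np.count_nonzero(w), (Xp @ w > Xn @ w).mean())  # sparsity, pairwise accuracy
```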

Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent

no code implementations • ICML 2020 • Yunwen Lei, Yiming Ying

In this paper, we provide a fine-grained analysis of stability and generalization for SGD by substantially relaxing these assumptions.

Generalization Bounds

Optimal Stochastic and Online Learning with Individual Iterates

no code implementations • NeurIPS 2019 • Yunwen Lei, Peng Yang, Ke Tang, Ding-Xuan Zhou

In this paper, we propose a theoretically sound strategy to select an individual iterate of the vanilla SCMD, which is able to achieve optimal rates for both convex and strongly convex problems in a non-smooth learning setting.

Sparse Learning

On Performance Estimation in Automatic Algorithm Configuration

no code implementations • 19 Nov 2019 • Shengcai Liu, Ke Tang, Yunwen Lei, Xin Yao

Over the last decade, research on automated parameter tuning, often referred to as automatic algorithm configuration (AAC), has made significant progress.

Stochastic Proximal AUC Maximization

no code implementations • 14 Jun 2019 • Yunwen Lei, Yiming Ying

In this paper we consider the problem of maximizing the area under the ROC curve (AUC), which is a widely used performance metric in imbalanced classification and anomaly detection.

Anomaly Detection imbalanced classification

Norm-based generalisation bounds for multi-class convolutional neural networks

no code implementations • 29 May 2019 • Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft

This holds even when formulating the bounds in terms of the $L^2$-norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes.

A Generalization Error Bound for Multi-class Domain Generalization

no code implementations • 24 May 2019 • Aniket Anand Deshmukh, Yunwen Lei, Srinagesh Sharma, Urun Dogan, James W. Cutler, Clayton Scott

Domain generalization is the problem of assigning labels to an unlabeled data set, given several similar data sets for which labels have been provided.

Classification Domain Generalization +2

Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions

no code implementations • 3 Feb 2019 • Yunwen Lei, Ting Hu, Guiying Li, Ke Tang

While the behavior of SGD is well understood in the convex learning setting, the existing theoretical results for SGD applied to nonconvex objective functions are far from mature.

Stochastic Composite Mirror Descent: Optimal Bounds with High Probabilities

no code implementations • NeurIPS 2018 • Yunwen Lei, Ke Tang

We apply the derived computational error bounds to study the generalization performance of multi-pass stochastic gradient descent (SGD) in a non-parametric setting.

Generalization Bounds Vocal Bursts Intensity Prediction

Convergence of Online Mirror Descent

no code implementations • 18 Feb 2018 • Yunwen Lei, Ding-Xuan Zhou

The step-size condition is $\lim_{t\to\infty}\eta_t=0$ and $\sum_{t=1}^{\infty}\eta_t=\infty$ in the case of positive variances.
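
As a concrete check (an illustration, not taken from the paper), polynomially decaying step sizes satisfy both requirements:

```latex
\eta_t = t^{-\theta}, \quad \theta \in (0, 1]
\quad\Longrightarrow\quad
\lim_{t\to\infty}\eta_t = 0
\quad\text{and}\quad
\sum_{t=1}^{\infty}\eta_t = \infty .
```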

Convergence of Unregularized Online Learning Algorithms

no code implementations • 9 Aug 2017 • Yunwen Lei, Lei Shi, Zheng-Chu Guo

In this paper we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization.
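
Concretely, unregularized online gradient descent in an RKHS maintains the iterate as a kernel expansion over past examples and updates it by the residual at each new point, as in the sketch below; the Gaussian kernel, step sizes, and one-dimensional target are illustrative assumptions.

```python
# Unregularized online gradient descent in an RKHS (illustrative).
import numpy as np

def K(a, b):                              # Gaussian (RBF) kernel
    return np.exp(-0.5 * (a - b) ** 2)

rng = np.random.default_rng(0)
xs, coefs = [], []
for t in range(1, 501):
    x = rng.uniform(-3, 3)
    y = np.sin(x) + 0.1 * rng.normal()
    f_x = sum(c * K(xp, x) for c, xp in zip(coefs, xs))  # prediction f_t(x_t)
    # Update rule: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)
    coefs.append(-(1.0 / np.sqrt(t)) * (f_x - y))
    xs.append(x)

print(sum(c * K(xp, 1.0) for c, xp in zip(coefs, xs)), np.sin(1.0))
```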

Data-dependent Generalization Bounds for Multi-class Classification

no code implementations • 29 Jun 2017 • Yunwen Lei, Urun Dogan, Ding-Xuan Zhou, Marius Kloft

In this paper, we study data-dependent generalization error bounds exhibiting a mild dependency on the number of classes, making them suitable for multi-class learning with a large number of label classes.

Classification General Classification +2

Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning

no code implementations • 18 Feb 2016 • Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi, Georgios Anagnostopoulos

We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), using which we establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC).

Multi-Task Learning

Local Rademacher Complexity Bounds based on Covering Numbers

no code implementations • 6 Oct 2015 • Yunwen Lei, Lixin Ding, Yingzhou Bi

This paper provides a general result on controlling local Rademacher complexities, which relates, in an elegant form, complexities constrained by the expected norm to the corresponding ones constrained by the empirical norm.

Localized Multiple Kernel Learning---A Convex Approach

no code implementations • 14 Jun 2015 • Yunwen Lei, Alexander Binder, Ürün Dogan, Marius Kloft

We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure.

Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms

no code implementations • NeurIPS 2015 • Yunwen Lei, Ürün Dogan, Alexander Binder, Marius Kloft

This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis.

General Classification Generalization Bounds +1
