Search Results for author: Yiming Ying

Found 41 papers, 13 papers with code

Differentially Private Non-convex Learning for Multi-layer Neural Networks

no code implementations 12 Oct 2023 Hanpu Shen, Cheng-Long Wang, Zihang Xiang, Yiming Ying, Di Wang

This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node.

Stochastic Optimization

Outlier Robust Adversarial Training

1 code implementation 10 Sep 2023 Shu Hu, Zhenhuan Yang, Xin Wang, Yiming Ying, Siwei Lyu

Theoretically, we show that the learning objective of ORAT satisfies the $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate to adversarial 0/1 loss.

Adversarial Attack · Binary Classification

Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms

no code implementations 7 Jul 2023 Ming Yang, Xiyuan Wei, Tianbao Yang, Yiming Ying

Then, we establish the compositional uniform stability results for two popular stochastic compositional gradient descent algorithms, namely SCGD and SCSC.

Learning Theory · Meta-Learning

Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks

no code implementations 26 May 2023 Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou

This sheds light on sufficient or necessary conditions for under-parameterized and over-parameterized NNs trained by GD to attain the desired risk rate of $O(1/\sqrt{n})$.

Fairness-aware Differentially Private Collaborative Filtering

no code implementations 16 Mar 2023 Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying

Recently, there has been an increasing adoption of differential privacy guided algorithms for privacy-preserving machine learning tasks.

Collaborative Filtering · Fairness +1

Generalization Analysis for Contrastive Representation Learning

no code implementations 24 Feb 2023 Yunwen Lei, Tianbao Yang, Yiming Ying, Ding-Xuan Zhou

For self-bounding Lipschitz loss functions, we further improve our results by developing optimistic bounds which imply fast rates in a low noise condition.

Contrastive Learning · Generalization Bounds +1

Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks

no code implementations 19 Sep 2022 Yunwen Lei, Rong Jin, Yiming Ying

While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks remains largely elusive.

Stability and Generalization for Markov Chain Stochastic Gradient Methods

no code implementations 16 Sep 2022 Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou

To the best of our knowledge, this is the first generalization analysis of SGMs when the gradients are sampled from a Markov process.

Generalization Bounds · Learning Theory

Differentially Private Stochastic Gradient Descent with Low-Noise

no code implementations 9 Sep 2022 Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou

In this paper, we focus on the privacy and utility (measured by excess risk bounds) performances of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization.

Privacy Preserving

Minimax AUC Fairness: Efficient Algorithm with Provable Convergence

1 code implementation 22 Aug 2022 Zhenhuan Yang, Yan Lok Ko, Kush R. Varshney, Yiming Ying

We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.

Decision Making · Fairness +1

AUC Maximization in the Era of Big Data and AI: A Survey

no code implementations 28 Mar 2022 Tianbao Yang, Yiming Ying

We also identify and discuss remaining and emerging issues for deep AUC maximization, and provide suggestions on topics for future work.

Differentially Private SGDA for Minimax Problems

no code implementations 22 Jan 2022 Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying

We further provide its utility analysis in the nonconvex-strongly-concave setting which is the first-ever-known result in terms of the primal population risk.

Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity

1 code implementation 30 Dec 2021 Dixian Zhu, Yiming Ying, Tianbao Yang

We study a family of loss functions named label-distributionally robust (LDR) losses for multi-class classification, formulated from a distributionally robust optimization (DRO) perspective, where the uncertainty in the given label information is modeled and captured by taking the worst case over the distributional weights.

Classification · Consistency · Multi-class Classification

Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

no code implementations NeurIPS 2021 Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, in which one needs to pair the current instance with a buffer of previous instances that must be sufficiently large, and which therefore suffers from a scalability issue.

Generalization Bounds · Metric Learning +1
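The buffering approach described in the abstract can be sketched as follows. This is an illustrative sketch only: the function name, the pairwise hinge surrogate, and the FIFO eviction policy are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

def buffered_ogd(stream, dim, buffer_size=100, lr=0.01):
    """Online gradient descent for pairwise learning (illustrative sketch).

    Each incoming example (x_t, y_t) is paired with every example in a FIFO
    buffer of previous instances; the model is updated with the averaged
    gradient of a pairwise hinge loss max(0, 1 - w·(x_pos - x_neg)).
    """
    w = np.zeros(dim)
    buffer = []
    for x_t, y_t in stream:
        grads = np.zeros(dim)
        n_pairs = 0
        for x_s, y_s in buffer:
            if y_t != y_s:  # only pairs with differing labels matter for AUC-type losses
                diff = (x_t - x_s) if y_t > y_s else (x_s - x_t)
                if 1.0 - w @ diff > 0:  # hinge loss is active for this pair
                    grads -= diff
                    n_pairs += 1
        if n_pairs > 0:
            w -= lr * grads / n_pairs
        buffer.append((x_t, y_t))
        if len(buffer) > buffer_size:
            buffer.pop(0)  # FIFO eviction keeps memory at O(buffer_size)
    return w
```

The sketch makes the scalability issue visible: per-iteration cost grows linearly with the buffer size, which is exactly the dependence the paper's simpler algorithms aim to remove.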

Generalization Guarantee of SGD for Pairwise Learning

no code implementations NeurIPS 2021 Yunwen Lei, Mingrui Liu, Yiming Ying

We develop a novel high-probability generalization bound for uniformly-stable algorithms to incorporate the variance information for better generalization, based on which we establish the first nonsmooth learning algorithm to achieve almost optimal high-probability and dimension-independent generalization bounds in linear time.

Generalization Bounds · Metric Learning

Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

1 code implementation 23 Nov 2021 Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, in which one needs to pair the current instance with a buffer of previous instances that must be sufficiently large, and which therefore suffers from a scalability issue.

Generalization Bounds · Metric Learning +1

Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning

1 code implementation 9 Jun 2021 Bokun Wang, Zhuoning Yuan, Yiming Ying, Tianbao Yang

The proposed algorithms require sampling a constant number of tasks and data samples per iteration, making them suitable for the continual learning scenario.

Continual Learning · Meta-Learning +2

Sum of Ranked Range Loss for Supervised Learning

1 code implementation 7 Jun 2021 Shu Hu, Yiming Ying, Xin Wang, Siwei Lyu

A combination loss of AoRR and TKML is proposed as a new learning objective for improving the robustness of multi-label learning in the face of outliers in samples and labels alike.

Multi-class Classification · Multi-Label Learning

Stability and Generalization of Stochastic Gradient Methods for Minimax Problems

1 code implementation 8 May 2021 Yunwen Lei, Zhenhuan Yang, Tianbao Yang, Yiming Ying

In this paper, we provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems under both convex-concave and nonconvex-nonconcave cases through the lens of algorithmic stability.

Generalization Bounds

Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity

1 code implementation 9 Feb 2021 Zhuoning Yuan, Zhishuai Guo, Yi Xu, Yiming Ying, Tianbao Yang

Deep AUC (area under the ROC curve) Maximization (DAM) has attracted much attention recently due to its great potential for imbalanced data classification.

Federated Learning

Differentially Private SGD with Non-Smooth Losses

no code implementations 22 Jan 2021 Puyu Wang, Yunwen Lei, Yiming Ying, Hai Zhang

We significantly relax these restrictive assumptions and establish privacy and generalization (utility) guarantees for private SGD algorithms using output and gradient perturbations associated with non-smooth convex losses.

Stochastic Hard Thresholding Algorithms for AUC Maximization

1 code implementation 4 Nov 2020 Zhenhuan Yang, Baojian Zhou, Yunwen Lei, Yiming Ying

In this paper, we aim to develop stochastic hard thresholding algorithms for the important problem of AUC maximization in imbalanced classification.

imbalanced classification

Learning by Minimizing the Sum of Ranked Range

1 code implementation NeurIPS 2020 Shu Hu, Yiming Ying, Xin Wang, Siwei Lyu

In forming learning objectives, one oftentimes needs to aggregate a set of individual values to a single output.

Binary Classification · General Classification +2
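The ranked-range idea of aggregating individual values can be sketched as follows, assuming the range runs from the (k+1)-th to the m-th largest value; the function name and argument convention are illustrative, not taken from the paper.

```python
import numpy as np

def sum_of_ranked_range(values, k, m):
    """Sum of the values ranked from (k+1)-th to m-th largest (SoRR sketch).

    Setting k = 0 recovers the sum of the top-m values; discarding the k
    largest values first makes the aggregate robust to a few extreme outliers.
    """
    assert 0 <= k < m <= len(values)
    ordered = np.sort(np.asarray(values, dtype=float))[::-1]  # descending order
    return ordered[k:m].sum()
```

For example, with losses [5, 4, 3, 2, 1], the range k = 1, m = 3 sums the 2nd and 3rd largest values and ignores the largest one entirely.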

Online AUC Optimization for Sparse High-Dimensional Datasets

1 code implementation 23 Sep 2020 Baojian Zhou, Yiming Ying, Steven Skiena

The Area Under the ROC Curve (AUC) is a widely used performance measure for imbalanced classification arising from many application domains where high-dimensional sparse data is abundant.

imbalanced classification · Vocal Bursts Intensity Prediction
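As a concrete reminder of what the AUC measures, it equals the fraction of positive–negative pairs ranked correctly by the scores (with ties counted half). A minimal direct computation, for illustration only (production code would use an O(n log n) rank-based formula or `sklearn.metrics.roc_auc_score`):

```python
import numpy as np

def pairwise_auc(scores, labels):
    """AUC via explicit O(n_pos * n_neg) pairwise comparison."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    correct = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (correct + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise structure is what makes AUC optimization harder than standard classification and motivates the online and stochastic formulations in several of the papers listed here.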

Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent

no code implementations ICML 2020 Yunwen Lei, Yiming Ying

In this paper, we provide a fine-grained analysis of stability and generalization for SGD by substantially relaxing these assumptions.

Generalization Bounds

Stochastic AUC Maximization with Deep Neural Networks

no code implementations ICLR 2020 Mingrui Liu, Zhuoning Yuan, Yiming Ying, Tianbao Yang

In this paper, we consider stochastic AUC maximization problem with a deep neural network as the predictive model.

Stochastic Proximal AUC Maximization

no code implementations 14 Jun 2019 Yunwen Lei, Yiming Ying

In this paper we consider the problem of maximizing the Area under the ROC curve (AUC) which is a widely used performance metric in imbalanced classification and anomaly detection.

Anomaly Detection · imbalanced classification

Dual Averaging Method for Online Graph-structured Sparsity

1 code implementation 26 May 2019 Baojian Zhou, Feng Chen, Yiming Ying

Online learning algorithms update models with one sample per iteration, making them efficient for processing large-scale datasets and useful for detecting events of social concern, such as disease outbreaks and traffic congestion, on the fly.

Stochastic Iterative Hard Thresholding for Graph-structured Sparsity Optimization

1 code implementation 9 May 2019 Baojian Zhou, Feng Chen, Yiming Ying

Stochastic optimization algorithms update models sequentially with cheap per-iteration costs, which makes them amenable to large-scale data analysis.

Stochastic Optimization

Stability and Optimization Error of Stochastic Gradient Descent for Pairwise Learning

no code implementations 25 Apr 2019 Wei Shen, Zhenhuan Yang, Yiming Ying, Xiaoming Yuan

From this fundamental trade-off, we obtain lower bounds for the optimization error of SGD algorithms and the excess expected risk over a class of pairwise losses.

Generalization Bounds · Metric Learning

Stochastic Proximal Algorithms for AUC Maximization

no code implementations ICML 2018 Michael Natole, Yiming Ying, Siwei Lyu

Stochastic optimization algorithms such as SGD update the model sequentially with cheap per-iteration costs, making them amenable to large-scale data analysis.

Classification · General Classification +2

A Univariate Bound of Area Under ROC

no code implementations 16 Apr 2018 Siwei Lyu, Yiming Ying

In this work, we describe a new surrogate loss based on a reformulation of the AUC risk, which does not require pairwise comparison but rankings of the predictions.

Binary Classification

Learning with Correntropy-induced Losses for Regression with Mixture of Symmetric Stable Noise

no code implementations 1 Mar 2018 Yunlong Feng, Yiming Ying

Motivated by the practical way of generating non-Gaussian noise or outliers, we introduce the mixture of symmetric stable noise, which includes Gaussian noise, Cauchy noise, and their mixture as special cases, to model non-Gaussian noise or outliers.

regression

Learning with Average Top-k Loss

no code implementations NeurIPS 2017 Yanbo Fan, Siwei Lyu, Yiming Ying, Bao-Gang Hu

We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM.

Binary Classification · General Classification +1
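The average top-k (ATk) aggregate loss described above is the average of the k largest individual losses, interpolating between the maximum loss (k = 1) and the average loss (k = n). A minimal sketch (function name is illustrative):

```python
import numpy as np

def average_top_k_loss(losses, k):
    """Average of the k largest individual losses (ATk aggregate).

    k = 1 recovers the maximum loss; k = len(losses) recovers the mean loss.
    """
    losses = np.asarray(losses, dtype=float)
    assert 1 <= k <= len(losses)
    return np.sort(losses)[-k:].mean()
```

Tuning k trades off sensitivity to hard examples (small k) against robustness of the plain average (large k).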

Stochastic Online AUC Maximization

no code implementations NeurIPS 2016 Yiming Ying, Longyin Wen, Siwei Lyu

From this saddle-point representation, a stochastic online algorithm (SOLAM) is proposed whose per-iteration time and space complexity is that of a single datum.

Unregularized Online Learning Algorithms with General Loss Functions

no code implementations 2 Mar 2015 Yiming Ying, Ding-Xuan Zhou

Firstly, we derive explicit convergence rates of the unregularized online learning algorithms for classification associated with a general gamma-activating loss (see Definition 1 in the paper).

Online Pairwise Learning Algorithms with Kernels

no code implementations 25 Feb 2015 Yiming Ying, Ding-Xuan Zhou

In this paper, we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS), which we refer to as the Online Pairwise lEaRning Algorithm (OPERA).

Metric Learning

Guaranteed Classification via Regularized Similarity Learning

no code implementations 13 Jun 2013 Zheng-Chu Guo, Yiming Ying

In this paper, we propose a regularized similarity learning formulation associated with general matrix-norms, and establish their generalization bounds.

BIG-bench Machine Learning · Classification +3

Analysis of SVM with Indefinite Kernels

no code implementations NeurIPS 2009 Yiming Ying, Colin Campbell, Mark Girolami

The recent introduction of indefinite SVM by Luss and d'Aspremont [15] has effectively demonstrated SVM classification with a non-positive semi-definite kernel (indefinite kernel).

Sparse Metric Learning via Smooth Optimization

no code implementations NeurIPS 2009 Yiming Ying, Kai-Zhu Huang, Colin Campbell

From this saddle representation, we develop an efficient smooth optimization approach for sparse metric learning although the learning model is based on a non-differential loss function.

Dimensionality Reduction · Metric Learning
