Search Results for author: Xiao-Tong Yuan

Found 24 papers, 3 papers with code

Iterative Regularization with k-support Norm: An Important Complement to Sparse Recovery

1 code implementation19 Dec 2023 William de Vazelhes, Bhaskar Mukhoty, Xiao-Tong Yuan, Bin Gu

However, most of those iterative methods are based on the $\ell_1$ norm, which requires restrictive applicability conditions and can fail in many cases.

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation

no code implementations9 Jan 2023 Xiao-Tong Yuan, Ping Li

The stochastic proximal point (SPP) methods have gained recent attention for stochastic optimization, showcasing strong convergence guarantees and robustness superior to the classic stochastic gradient descent (SGD) methods at little to no additional computational overhead.

Stochastic Optimization
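
To make the SPP idea concrete, here is a minimal sketch (not the authors' code) of one minibatch stochastic proximal point step for a least-squares loss, where the proximal subproblem has a closed form; the step size, batch size, and data are illustrative assumptions.

```python
import numpy as np

def spp_step_least_squares(x, A_batch, b_batch, eta):
    """One minibatch stochastic proximal point (SPP) step for f(x) = 0.5*||Ax - b||^2.

    Solves x_next = argmin_z 0.5*||A_batch z - b_batch||^2 + (1/(2*eta))*||z - x||^2,
    which for least squares reduces to a small linear system.
    """
    d = x.shape[0]
    H = A_batch.T @ A_batch + np.eye(d) / eta      # regularized curvature
    g = A_batch.T @ b_batch + x / eta              # shifted right-hand side
    return np.linalg.solve(H, g)

# toy usage with synthetic data
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)
x = np.zeros(10)
for t in range(50):
    idx = rng.choice(100, size=16, replace=False)  # sample a minibatch
    x = spp_step_least_squares(x, A[idx], b[idx], eta=0.5)
```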

Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity

no code implementations11 Oct 2022 William de Vazelhes, Hualin Zhang, Huimin Wu, Xiao-Tong Yuan, Bin Gu

To solve this puzzle, in this paper, we focus on the $\ell_0$ constrained black-box stochastic optimization problems, and propose a new stochastic zeroth-order gradient hard-thresholding (SZOHT) algorithm with a general ZO gradient estimator powered by a novel random support sampling.

Portfolio Optimization Sparse Learning +1
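
A hedged sketch of the two ingredients named in the abstract, a zeroth-order gradient estimate over a randomly sampled coordinate support followed by hard thresholding; the finite-difference estimator, step size, and support size below are simplifying assumptions, not the paper's exact SZOHT estimator.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def zo_ht_step(f, x, k, eta=0.1, mu=1e-4, s=5, rng=None):
    """One simplified zeroth-order hard-thresholding step.

    Estimates partial derivatives by central finite differences on a randomly
    sampled support of s coordinates, takes a gradient step, then projects
    onto the set of k-sparse vectors.
    """
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    support = rng.choice(x.size, size=s, replace=False)  # random support sampling
    for i in support:
        e = np.zeros_like(x); e[i] = mu
        g[i] = (f(x + e) - f(x - e)) / (2 * mu)           # central finite difference
    return hard_threshold(x - eta * g, k)

# toy usage: sparse quadratic objective queried only through function values
target = np.zeros(20); target[:3] = [1.0, -2.0, 0.5]
f = lambda z: np.sum((z - target) ** 2)
x = np.zeros(20)
for _ in range(200):
    x = zo_ht_step(f, x, k=3)
```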

On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond

no code implementations10 Jun 2022 Xiao-Tong Yuan, Ping Li

The FedProx algorithm is a simple yet powerful distributed proximal point optimization method widely used for federated learning (FL) over heterogeneous data.

Federated Learning
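
For context on what the abstract calls a distributed proximal point method: in FedProx each client minimizes its local loss plus a proximal term anchored at the current global model, and the server averages the results. The sketch below is a toy illustration under that reading; the local solver, hyperparameters, and client losses are assumptions.

```python
import numpy as np

def fedprox_local_update(w_global, grad_local, mu=0.1, lr=0.05, steps=10):
    """Approximate local FedProx solve: min_w f_k(w) + (mu/2)*||w - w_global||^2,
    run here as a few gradient steps starting from the global model."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (grad_local(w) + mu * (w - w_global))  # local gradient + proximal pull
    return w

def fedprox_round(w_global, client_grads, mu=0.1):
    """One communication round: each client solves its proximal subproblem,
    then the server averages the local models."""
    locals_ = [fedprox_local_update(w_global, g, mu) for g in client_grads]
    return np.mean(locals_, axis=0)

# toy usage: two clients with heterogeneous quadratic losses f_k(w) = 0.5*||w - c_k||^2
clients = [lambda w: w - np.array([1.0, 0.0]), lambda w: w - np.array([0.0, 3.0])]
w = np.zeros(2)
for _ in range(30):
    w = fedprox_round(w, clients, mu=0.1)
```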

Boosting the Confidence of Generalization for $L_2$-Stable Randomized Learning Algorithms

no code implementations8 Jun 2022 Xiao-Tong Yuan, Ping Li

We further specialize these generic results to stochastic gradient descent (SGD) to derive improved high-probability generalization bounds for convex or non-convex optimization problems with natural time-decaying learning rates, which could not be proved with the existing hypothesis-stability or uniform-stability based results.

Generalization Bounds

Stability and Risk Bounds of Iterative Hard Thresholding

no code implementations17 Mar 2022 Xiao-Tong Yuan, Ping Li

In this paper, we analyze the generalization performance of the Iterative Hard Thresholding (IHT) algorithm widely used for sparse recovery problems.

Open-Ended Question Answering regression
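
As a reference point for the algorithm being analyzed, here is a minimal IHT sketch for sparse least squares; it illustrates the gradient-step-plus-top-k-projection loop only, and the step-size choice and toy data are assumptions, not anything from the paper's analysis.

```python
import numpy as np

def iht(A, b, k, eta=None, iters=200):
    """Iterative Hard Thresholding for min_x 0.5*||Ax - b||^2 s.t. ||x||_0 <= k.

    Each iteration takes a gradient step and keeps only the k largest-magnitude
    coordinates (the hard-thresholding projection).
    """
    n, d = A.shape
    eta = eta or 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size from the spectral norm
    x = np.zeros(d)
    for _ in range(iters):
        x = x - eta * A.T @ (A @ x - b)            # gradient step
        idx = np.argsort(np.abs(x))[:-k]           # indices outside the top-k
        x[idx] = 0.0                               # hard thresholding
    return x

# toy usage: recover a 3-sparse signal from noisy linear measurements
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200); x_true[[3, 50, 120]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = iht(A, b, k=3)
```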

A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning

no code implementations NeurIPS 2021 Pan Zhou, Caiming Xiong, Xiao-Tong Yuan, Steven Hoi

Although intuitive, such a naive label assignment strategy cannot reveal the underlying semantic similarity between a query and its positives and negatives, and impairs performance, since some negatives are semantically similar to the query or even share the same semantic class as the query.

Contrastive Learning Representation Learning +2

DeepACG: Co-Saliency Detection via Semantic-Aware Contrast Gromov-Wasserstein Distance

no code implementations CVPR 2021 Kaihua Zhang, Mingliang Dong, Bo Liu, Xiao-Tong Yuan, Qingshan Liu

These dense correlation volumes enable the network to accurately discover the structured pair-wise pixel similarities among the common salient objects.

Saliency Detection
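
A rough sketch of what a dense correlation volume between two feature maps looks like: all pair-wise pixel similarities collected into one matrix. The shapes and cosine normalization are illustrative assumptions, not the paper's Gromov-Wasserstein construction.

```python
import numpy as np

def dense_correlation_volume(feat1, feat2):
    """Compute all pair-wise cosine similarities between the pixels of two
    CNN feature maps of shape (C, H, W); the result has shape (H*W, H*W)."""
    c, h, w = feat1.shape
    f1 = feat1.reshape(c, -1)                      # (C, H*W)
    f2 = feat2.reshape(c, -1)
    f1 = f1 / (np.linalg.norm(f1, axis=0, keepdims=True) + 1e-8)
    f2 = f2 / (np.linalg.norm(f2, axis=0, keepdims=True) + 1e-8)
    return f1.T @ f2                               # dense pixel-to-pixel correlation volume

# toy usage
rng = np.random.default_rng(0)
volume = dense_correlation_volume(rng.normal(size=(64, 16, 16)),
                                  rng.normal(size=(64, 16, 16)))
print(volume.shape)  # (256, 256)
```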

Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization

no code implementations ICML 2020 Pan Zhou, Xiao-Tong Yuan

Particularly, in the case of $\epsilon=\mathcal{O}\big(1/\sqrt{n}\big)$, which is at the order of the intrinsic excess error bound of a learning model and thus sufficient for generalization, the stochastic gradient complexity bounds of HSDMPG for quadratic and generic loss functions are respectively $\mathcal{O}(n^{0.875}\log^{1.5}(n))$ and $\mathcal{O}(n^{0.875}\log^{2.25}(n))$, which, to the best of our knowledge, achieve for the first time optimal generalization in less than a single pass over the data.

Machine learning for faster and smarter fluorescence lifetime imaging microscopy

1 code implementation5 Aug 2020 Varun Mannam, Yide Zhang, Xiao-Tong Yuan, Cara Ravasio, Scott S. Howard

Fluorescence lifetime imaging microscopy (FLIM) is a powerful technique in biomedical research that uses the fluorophore decay rate to provide additional contrast in fluorescence microscopy.

BIG-bench Machine Learning lifetime image denoising
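
To illustrate the physics named in the abstract (contrast from the fluorophore decay rate), here is a hedged classical baseline: estimating a per-pixel lifetime by a log-linear fit of a mono-exponential decay. This is not the machine-learning pipeline proposed in the paper; the time range and counts are made up for the example.

```python
import numpy as np

def lifetime_loglinear(time_bins, decay_counts, eps=1e-6):
    """Estimate fluorescence lifetime tau from a mono-exponential decay
    I(t) = I0 * exp(-t / tau) via least squares on log(I(t))."""
    y = np.log(np.maximum(decay_counts, eps))
    slope, _ = np.polyfit(time_bins, y, deg=1)     # log I(t) is linear in t with slope -1/tau
    return -1.0 / slope

# toy usage: simulate a 2 ns lifetime decay sampled over 12.5 ns
t = np.linspace(0, 12.5, 256)
counts = 1000.0 * np.exp(-t / 2.0)
print(lifetime_loglinear(t, counts))   # approximately 2.0
```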

Meta-Learning with Network Pruning

no code implementations ECCV 2020 Hongduan Tian, Bo Liu, Xiao-Tong Yuan, Qingshan Liu

To remedy this deficiency, we propose a network pruning based meta-learning approach for overfitting reduction via explicitly controlling the capacity of the network.

Few-Shot Learning Network Pruning
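
A minimal sketch of the capacity-control idea via magnitude pruning of a layer's weights; this is generic magnitude pruning offered as an assumed illustration, not the paper's specific pruning schedule or its meta-learning procedure.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude fraction of weights, explicitly
    limiting the effective capacity of a layer."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)   # cut-off for removal
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# toy usage: prune 80% of a dense layer, then keep the mask fixed during adaptation
rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))
W_pruned, mask = magnitude_prune(W, sparsity=0.8)
print(mask.mean())   # roughly 0.2 of the weights survive
```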

Efficient Meta Learning via Minibatch Proximal Update

no code implementations NeurIPS 2019 Pan Zhou, Xiao-Tong Yuan, Huan Xu, Shuicheng Yan, Jiashi Feng

We address the problem of meta-learning, which learns a prior over hypotheses from a sample of meta-training tasks for fast adaptation on meta-testing tasks.

Few-Shot Learning
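
A hedged sketch of the minibatch proximal update idea: each task-specific model is fit on its minibatch while being pulled toward a shared prior, and the prior is then moved toward the adapted solutions. The quadratic toy tasks, inner solver, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def proximal_task_adapt(prior, grad_task, lam=1.0, lr=0.1, steps=20):
    """Approximately solve min_w L_task(w) + (lam/2)*||w - prior||^2,
    the per-task minibatch proximal update around the meta-learned prior."""
    w = prior.copy()
    for _ in range(steps):
        w -= lr * (grad_task(w) + lam * (w - prior))
    return w

def meta_update(prior, task_grads, meta_lr=0.5, lam=1.0):
    """Move the prior toward the average of the adapted task solutions."""
    adapted = [proximal_task_adapt(prior, g, lam) for g in task_grads]
    return prior + meta_lr * (np.mean(adapted, axis=0) - prior)

# toy usage: tasks are quadratics centered at different points
tasks = [lambda w: w - np.array([2.0, 0.0]), lambda w: w - np.array([0.0, 2.0])]
prior = np.zeros(2)
for _ in range(50):
    prior = meta_update(prior, tasks)
```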

On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond

no code implementations6 Aug 2019 Xiao-Tong Yuan, Ping Li

We first introduce a simple variant of DANE equipped with backtracking line search, for which global asymptotic convergence and sharper local non-asymptotic convergence rate guarantees can be proved for both quadratic and non-quadratic strongly convex functions.
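
The globalization ingredient mentioned above is backtracking line search. Below is a generic Armijo backtracking sketch, not the DANE-specific local subproblem; the shrinkage factor, sufficient-decrease constant, and toy quadratic are assumptions for illustration.

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, direction, alpha0=1.0, beta=0.5, c=1e-4):
    """Shrink the step size until the Armijo sufficient-decrease condition
    f(x + alpha*d) <= f(x) + c*alpha*<grad f(x), d> holds."""
    alpha, fx, slope = alpha0, f(x), grad_f(x) @ direction
    while f(x + alpha * direction) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

# toy usage: one damped Newton-like step on a strongly convex quadratic
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x = np.array([4.0, -2.0])
d = -np.linalg.solve(Q, grad(x))                 # (approximate) Newton direction
x = x + armijo_backtracking(f, grad, x, d) * d
```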

Efficient Stochastic Gradient Hard Thresholding

no code implementations NeurIPS 2018 Pan Zhou, Xiao-Tong Yuan, Jiashi Feng

To address these deficiencies, we propose an efficient hybrid stochastic gradient hard thresholding (HSG-HT) method that can be provably shown to have sample-size-independent gradient evaluation and hard thresholding complexity bounds.

Computational Efficiency

New Insight into Hybrid Stochastic Gradient Descent: Beyond With-Replacement Sampling and Convexity

no code implementations NeurIPS 2018 Pan Zhou, Xiao-Tong Yuan, Jiashi Feng

In this paper, we affirmatively answer this open question by showing that under WoRS and for both convex and non-convex problems, it is still possible for HSGD (with constant step-size) to match full gradient descent in rate of convergence, while maintaining comparable sample-size-independent incremental first-order oracle complexity to stochastic gradient descent.

Open-Ended Question Answering

Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification

1 code implementation23 Mar 2017 Qingshan Liu, Feng Zhou, Renlong Hang, Xiao-Tong Yuan

In the network, the issue of spectral feature extraction is considered as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it.

General Classification Hyperspectral Image Classification
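
A sketch of the "spectral bands as a sequence" idea from the abstract: each pixel's band vector is scanned by a recurrent cell in both directions. For brevity this uses a plain tanh RNN cell rather than the paper's bidirectional convolutional LSTM, and all shapes and weights are illustrative assumptions.

```python
import numpy as np

def scan_spectrum(bands, W_in, W_h):
    """Run a simple recurrent cell across the spectral dimension of one pixel.

    bands: (num_bands,) reflectance values treated as a sequence.
    Returns the hidden state after the last band.
    """
    h = np.zeros(W_h.shape[0])
    for value in bands:                            # one recurrent step per spectral band
        h = np.tanh(W_in * value + W_h @ h)
    return h

def bidirectional_spectral_features(bands, W_in, W_h):
    """Concatenate forward and backward spectral scans (the 'bidirectional' part)."""
    return np.concatenate([scan_spectrum(bands, W_in, W_h),
                           scan_spectrum(bands[::-1], W_in, W_h)])

# toy usage: a single pixel with 103 spectral bands, 16 hidden units
rng = np.random.default_rng(0)
pixel = rng.random(103)
W_in, W_h = rng.normal(scale=0.1, size=16), rng.normal(scale=0.1, size=(16, 16))
print(bidirectional_spectral_features(pixel, W_in, W_h).shape)   # (32,)
```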

Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods

no code implementations19 Jul 2016 Xiaojie Jin, Xiao-Tong Yuan, Jiashi Feng, Shuicheng Yan

In this paper, we propose an iterative hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs).

Object Recognition

Additive Nearest Neighbor Feature Maps

no code implementations ICCV 2015 Zhenzhen Wang, Xiao-Tong Yuan, Qingshan Liu, Shuicheng Yan

In this paper, we present a concise framework to approximately construct feature maps for nonlinear additive kernels such as the Intersection, Hellinger's, and Chi^2 kernels.
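
The simplest case of an explicit feature map for an additive kernel is Hellinger's kernel, for which the map is exact: k(x, y) = sum_i sqrt(x_i * y_i) = <sqrt(x), sqrt(y)>. The sketch below shows only this exact case; the Intersection and Chi^2 kernels require approximate maps, which it does not cover and which the paper addresses.

```python
import numpy as np

def hellinger_feature_map(x):
    """Exact explicit feature map for Hellinger's kernel on non-negative histograms:
    k(x, y) = sum_i sqrt(x_i * y_i) = <sqrt(x), sqrt(y)>."""
    return np.sqrt(np.maximum(x, 0.0))

# toy usage: the linear kernel on mapped features equals Hellinger's kernel
rng = np.random.default_rng(0)
x, y = rng.random(10), rng.random(10)
exact = np.sum(np.sqrt(x * y))
via_map = hellinger_feature_map(x) @ hellinger_feature_map(y)
print(np.isclose(exact, via_map))   # True
```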

Newton Greedy Pursuit: A Quadratic Approximation Method for Sparsity-Constrained Optimization

no code implementations CVPR 2014 Xiao-Tong Yuan, Qingshan Liu

The main theme of this type of method is to evaluate the function gradient at the previous iterate to update the non-zero entries and their values in the next iteration.

Unsupervised Pretraining Encourages Moderate-Sparseness

no code implementations20 Dec 2013 Jun Li, Wei Luo, Jian Yang, Xiao-Tong Yuan

It is well known that direct training of deep neural networks will generally lead to poor results.

Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization

no code implementations22 Nov 2013 Xiao-Tong Yuan, Ping Li, Tong Zhang

Numerical evidence shows that our method is superior to the state-of-the-art greedy selection methods in sparse logistic regression and sparse precision matrix estimation tasks.

Compressive Sensing regression
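
A hedged sketch of the gradient hard thresholding pursuit template for sparse least squares: gradient step, keep the top-k support, then re-fit on that support (debiasing). The step size and the plain least-squares debias are illustrative simplifications rather than the paper's exact algorithm or guarantees.

```python
import numpy as np

def graht_pursuit(A, b, k, eta=None, iters=50):
    """Gradient hard-thresholding-pursuit-style loop for
    min_x 0.5*||Ax - b||^2 s.t. ||x||_0 <= k:
    (1) gradient step, (2) keep top-k support, (3) least-squares re-fit on that support."""
    n, d = A.shape
    eta = eta or 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(d)
    for _ in range(iters):
        u = x - eta * A.T @ (A @ x - b)                  # gradient step
        support = np.argsort(np.abs(u))[-k:]             # top-k coordinates
        x = np.zeros(d)
        x[support], *_ = np.linalg.lstsq(A[:, support], b, rcond=None)  # debias on support
    return x

# toy usage: sparse recovery from random measurements
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 150)) / np.sqrt(60)
x_true = np.zeros(150); x_true[[5, 40, 99]] = [1.0, -2.0, 0.7]
x_hat = graht_pursuit(A, A @ x_true, k=3)
```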

Learning Pairwise Graphical Models with Nonlinear Sufficient Statistics

no code implementations21 Nov 2013 Xiao-Tong Yuan, Ping Li, Tong Zhang

We investigate a generic problem of learning pairwise exponential family graphical models with pairwise sufficient statistics defined by a global mapping function, e.g., Mercer kernels.

Computational Efficiency
