Search Results for author: Shixiang Chen

Found 17 papers, 4 papers with code

Global Convergence of Decentralized Retraction-Free Optimization on the Stiefel Manifold

no code implementations • 19 May 2024 • Youbang Sun, Shixiang Chen, Alfredo Garcia, Shahin Shahrampour

Many classical and modern machine learning algorithms require solving optimization tasks under orthogonal constraints.
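As an illustration of what optimizing under orthogonal constraints involves (a generic sketch, not this paper's method — the paper's point is precisely to *avoid* the retraction used below), a standard Riemannian gradient step on the Stiefel manifold $\{X \in \mathbb{R}^{n\times p} : X^\top X = I\}$ projects the Euclidean gradient onto the tangent space and then retracts back onto the manifold:

```python
import numpy as np

def stiefel_tangent_projection(X, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold at X: P_X(G) = G - X * sym(X^T G)."""
    XtG = X.T @ G
    return G - X @ ((XtG + XtG.T) / 2)

def qr_retraction(Y):
    """Map an arbitrary full-rank n x p matrix back onto the Stiefel
    manifold via the Q factor of a thin QR decomposition."""
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)  # fix column signs

# One Riemannian gradient step for f(X) = -trace(X^T A X),
# whose minimizers span leading eigenvectors of the symmetric matrix A.
rng = np.random.default_rng(0)
n, p = 8, 3
A = rng.standard_normal((n, n)); A = A + A.T
X = qr_retraction(rng.standard_normal((n, p)))  # random feasible point
G = -2 * A @ X                                  # Euclidean gradient
xi = stiefel_tangent_projection(X, G)           # Riemannian gradient
X_new = qr_retraction(X - 0.1 * xi)             # step, then retract
print(np.allclose(X_new.T @ X_new, np.eye(p)))  # True: still feasible
```

Retraction-free schemes replace the QR step above with cheaper updates that only approach feasibility asymptotically, which is what makes the decentralized analysis nontrivial.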

FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data

no code implementations • 18 Sep 2023 • Hao Sun, Li Shen, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao

Federated learning is an emerging distributed machine learning method that enables a large number of clients to train a model without exchanging their local data.
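A minimal sketch of that training loop (generic FedAvg-style averaging with plain local SGD, for illustration only — FedLALR's actual contribution, client-specific adaptive learning rates, is not modeled here):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """A few local gradient steps on a least-squares loss ||Xw - y||^2 / n."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w_global, client_data):
    """One communication round: every client trains on its own data and
    the server averages the returned models (weighted by dataset size);
    the raw data never leaves the clients."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    models = [local_sgd(w_global.copy(), X, y) for X, y in client_data]
    return sum(a * w for a, w in zip(sizes / sizes.sum(), models))

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(4):  # heterogeneous feature scales across clients
    X = rng.standard_normal((20, 2)) * rng.uniform(0.8, 1.5)
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
print(np.round(w, 2))
```

With a fixed shared learning rate as above, heterogeneous (non-IID) clients slow convergence; adapting the rate per client is the gap the paper targets.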

Federated Learning, Scheduling

Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape

1 code implementation • 19 May 2023 • Yan Sun, Li Shen, Shixiang Chen, Liang Ding, Dacheng Tao

In federated learning (FL), a set of local clients is coordinated by a global server to cooperatively train one model with privacy protection.

Federated Learning

Decentralized Weakly Convex Optimization Over the Stiefel Manifold

no code implementations • 31 Mar 2023 • Jinxin Wang, Jiang Hu, Shixiang Chen, Zengde Deng, Anthony Man-Cho So

We focus on a class of non-smooth optimization problems over the Stiefel manifold in the decentralized setting, where a connected network of $n$ agents cooperatively minimize a finite-sum objective function with each component being weakly convex in the ambient Euclidean space.

AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks

no code implementations • 1 Mar 2023 • Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao

Integrating SAM with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically to train large-scale deep neural networks, but without theoretical guarantees, owing to the difficulty of jointly analyzing the coupled perturbation step, adaptive learning rate, and momentum step.
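The three coupled ingredients mentioned here can be sketched in a single update (a generic SAM + Adam-style step written for illustration; this is not the paper's exact AdaSAM recursion):

```python
import numpy as np

def sam_adam_step(w, grad_fn, m, v, t, lr=0.01, rho=0.05,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """One SAM-style step with momentum and an adaptive learning rate.
    The three coupled pieces the abstract refers to:
      1) perturbation: take the gradient at w + rho * g / ||g||;
      2) momentum: exponential moving average of that gradient;
      3) adaptive rate: rescale by the running second moment."""
    g = grad_fn(w)
    eps_w = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent perturbation
    g_sam = grad_fn(w + eps_w)                      # gradient at perturbed point
    m = beta1 * m + (1 - beta1) * g_sam             # momentum step
    v = beta2 * v + (1 - beta2) * g_sam ** 2        # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                    # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)     # adaptive update
    return w, m, v

# Minimize the toy quadratic f(w) = ||w||^2 / 2, so grad_fn(w) = w.
grad_fn = lambda w: w
w = np.array([2.0, -3.0])
m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = sam_adam_step(w, grad_fn, m, v, t)
print(np.linalg.norm(w))
```

The analytical difficulty the abstract points to is visible even here: the perturbation changes the point at which the gradient feeding both moment estimates is evaluated, so the three steps cannot be analyzed independently.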

Penalized Proximal Policy Optimization for Safe Reinforcement Learning

no code implementations • 24 May 2022 • Linrui Zhang, Li Shen, Long Yang, Shixiang Chen, Bo Yuan, Xueqian Wang, Dacheng Tao

Safe reinforcement learning aims to learn the optimal policy while satisfying safety constraints, which is essential in real-world applications.

Reinforcement Learning (RL) +1

Decentralized Riemannian Gradient Descent on the Stiefel Manifold

1 code implementation • 14 Feb 2021 • Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour

The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph.
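This setting can be illustrated in the unconstrained Euclidean case (a deliberate simplification: the paper's iterates live on the Stiefel manifold), using a doubly-stochastic mixing matrix built from the undirected communication graph:

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly-stochastic mixing matrix for an undirected graph via the
    Metropolis-Hastings rule: W_ij = 1 / (1 + max(d_i, d_j)) on edges."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def decentralized_gd(W, grads, x0, lr=0.1, iters=200):
    """Decentralized gradient descent: each agent averages with its
    neighbors (consensus step) and then takes a local gradient step."""
    X = x0.copy()                     # row i = agent i's iterate
    for _ in range(iters):
        X = W @ X                     # gossip / consensus step
        X = X - lr * np.vstack([g(x) for g, x in zip(grads, X)])
    return X

# 4 agents on a ring, each holding f_i(x) = ||x - c_i||^2 / 2; the
# minimizer of the finite sum f = sum_i f_i is the mean of the c_i.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
W = metropolis_weights(adj)
cs = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.]])
grads = [lambda x, c=c: x - c for c in cs]
X = decentralized_gd(W, grads, np.zeros((4, 2)))
print(np.round(X.mean(axis=0), 2))  # → [1. 1.]
```

On the Stiefel manifold the consensus step itself becomes nontrivial, since averages of feasible points generally leave the manifold — which is exactly the obstacle the Riemannian scheme addresses.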

Distributed Optimization

On the Local Linear Rate of Consensus on the Stiefel Manifold

no code implementations • 22 Jan 2021 • Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour

We study the convergence properties of the Riemannian gradient method for solving the consensus problem (for an undirected connected graph) over the Stiefel manifold.

A Manifold Proximal Linear Method for Sparse Spectral Clustering with Application to Single-Cell RNA Sequencing Data Analysis

no code implementations • 18 Jul 2020 • Zhongruo Wang, Bingyuan Liu, Shixiang Chen, Shiqian Ma, Lingzhou Xue, Hongyu Zhao

This paper considers a widely adopted model for SSC, which can be formulated as an optimization problem over the Stiefel manifold with a nonsmooth and nonconvex objective.


Manifold Proximal Point Algorithms for Dual Principal Component Pursuit and Orthogonal Dictionary Learning

no code implementations • 5 May 2020 • Shixiang Chen, Zengde Deng, Shiqian Ma, Anthony Man-Cho So

Second, we propose a stochastic variant of ManPPA called StManPPA, which is well suited for large-scale computation, and establish its sublinear convergence rate.

Dictionary Learning

Weakly Convex Optimization over Stiefel Manifold Using Riemannian Subgradient-Type Methods

1 code implementation • 12 Nov 2019 • Xiao Li, Shixiang Chen, Zengde Deng, Qing Qu, Zhihui Zhu, Anthony Man-Cho So

To the best of our knowledge, these are the first convergence guarantees for using Riemannian subgradient-type methods to optimize a class of nonconvex nonsmooth functions over the Stiefel manifold.

Dictionary Learning, Vocal Bursts Type Prediction

An Alternating Manifold Proximal Gradient Method for Sparse PCA and Sparse CCA

no code implementations • 27 Mar 2019 • Shixiang Chen, Shiqian Ma, Lingzhou Xue, Hui Zou

Sparse principal component analysis (PCA) and sparse canonical correlation analysis (CCA) are two essential techniques from high-dimensional statistics and machine learning for analyzing large-scale data.

Geometric descent method for convex composite minimization

no code implementations • NeurIPS 2017 • Shixiang Chen, Shiqian Ma, Wei Liu

In this paper, we extend the geometric descent method recently proposed by Bubeck, Lee and Singh to tackle nonsmooth and strongly convex composite problems.

