Search Results for author: Mingyi Hong

Found 85 papers, 16 papers with code

Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks

no code implementations ICML 2020 Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained min-max optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
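
As a hedged illustration of the gradient-free setting described above (ignoring constraints), the sketch below plugs a generic two-point Gaussian-smoothing gradient estimator into gradient descent-ascent on a toy saddle problem. The function `zo_grad`, the step sizes, and the objective are illustrative assumptions, not the paper's actual ZO min-max algorithm.

```python
import numpy as np

def zo_grad(f, x, mu=1e-3, n_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x: average finite
    differences of function *values* along random Gaussian directions."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

# Toy saddle problem: min_x max_y  x.y + 0.5||x||^2 - 0.5||y||^2,
# whose unique saddle point is (0, 0).
obj = lambda x, y: x @ y + 0.5 * x @ x - 0.5 * y @ y
rng = np.random.default_rng(1)
x, y = np.ones(3), np.ones(3)
for _ in range(500):
    x = x - 0.05 * zo_grad(lambda v: obj(v, y), x, rng=rng)  # descent step on x
    y = y + 0.05 * zo_grad(lambda v: obj(x, v), y, rng=rng)  # ascent step on y
```

On this strongly-convex-strongly-concave toy problem the iterates contract toward the saddle point (0, 0) using only function-value queries; the paper's setting and analysis are far more general.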

Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: Joint Gradient Estimation and Tracking

no code implementations ICML 2020 Haoran Sun, Songtao Lu, Mingyi Hong

Similarly, for online problems, the proposed method achieves an $\mathcal{O}(m \epsilon^{-3/2})$ sample complexity and an $\mathcal{O}(\epsilon^{-1})$ communication complexity, while the best existing bounds are $\mathcal{O}(m\epsilon^{-2})$ and $\mathcal{O}(\epsilon^{-2})$, respectively.

Stochastic Optimization

Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach

no code implementations 24 Nov 2023 Xinwei Zhang, Zhiqi Bu, Zhiwei Steven Wu, Mingyi Hong

In our work, we propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC, which not only offers a diminishing utility bound without inducing a constant clipping bias, but more importantly, it allows for an arbitrary choice of clipping threshold that is independent of the problem.
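
The constant clipping bias that motivates the paper can be seen in a few lines. Below is a minimal per-sample-clipping DP-SGD step in the style of DPSGD-GC (not the proposed error-feedback algorithm); `dpsgd_gc_step` and the toy gradients are illustrative assumptions.

```python
import numpy as np

def dpsgd_gc_step(w, per_sample_grads, clip=1.0, noise_mult=1.0, lr=1.0, rng=None):
    """One DP-SGD step with per-sample gradient clipping: clip each
    gradient to norm `clip`, average, then add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(per_sample_grads), size=w.shape)
    return w - lr * (avg + noise)

# Two opposing per-sample gradients of unequal norm: clipping maps both to
# unit norm, so they cancel even though the true average gradient is 1.5.
g1, g2 = np.array([4.0]), np.array([-1.0])
true_avg = (g1 + g2) / 2                                      # [1.5]
w_new = dpsgd_gc_step(np.zeros(1), [g1, g2], noise_mult=0.0)  # update vanishes
```

Setting `noise_mult=0.0` isolates the bias: the clipped average is 0 while the true average gradient is 1.5, i.e. exactly the kind of constant clipping bias the proposed error-feedback approach is designed to avoid.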

Demystifying Poisoning Backdoor Attacks from a Statistical Perspective

no code implementations 16 Oct 2023 Ganghua Wang, Xun Xian, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, Jie Ding

The growing dependence on machine learning in real-world applications emphasizes the importance of understanding and ensuring its safety.

Backdoor Attack

An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning

no code implementations 1 Aug 2023 Yihua Zhang, Prashant Khanduri, Ioannis Tsaknakis, Yuguang Yao, Mingyi Hong, Sijia Liu

Overall, we hope that this article can serve to accelerate the adoption of BLO as a generic tool to model, analyze, and innovate on a wide array of emerging SP and ML applications.

GLASU: A Communication-Efficient Algorithm for Federated Learning with Vertically Distributed Graph Data

no code implementations 16 Mar 2023 Xinwei Zhang, Mingyi Hong, Jie Chen

In this paper, we propose a model splitting method that splits a backbone GNN across the clients and the server and a communication-efficient algorithm, GLASU, to train such a model.

Federated Learning

What Is Missing in IRM Training and Evaluation? Challenges and Solutions

no code implementations 4 Mar 2023 Yihua Zhang, Pranay Sharma, Parikshit Ram, Mingyi Hong, Kush Varshney, Sijia Liu

We propose a new IRM variant to address this limitation based on a novel viewpoint of ensemble IRM games as consensus-constrained bi-level optimization.

Out-of-Distribution Generalization

Understanding Expertise through Demonstrations: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning

1 code implementation 15 Feb 2023 Siliang Zeng, Chenliang Li, Alfredo Garcia, Mingyi Hong

Offline inverse reinforcement learning (Offline IRL) aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.

Autonomous Driving Continuous Control +2

On the Robustness of deep learning-based MRI Reconstruction to image transformations

no code implementations 9 Nov 2022 Jinghan Jia, Mingyi Hong, Yimeng Zhang, Mehmet Akçakaya, Sijia Liu

We find a new instability source of MRI image reconstruction, i.e., the lack of reconstruction robustness against spatial transformations of an input, e.g., rotation and cutout.

Image Classification MRI Reconstruction

When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work

no code implementations NeurIPS 2021 Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo

Third, we consider a constrained optimization formulation where the feasible region is the nice local region, and prove that every KKT point is a nearly global minimizer.

Advancing Model Pruning via Bi-level Optimization

1 code implementation 8 Oct 2022 Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, Sijia Liu

To reduce the computation overhead, various efficient 'one-shot' pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP.

Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees

no code implementations 4 Oct 2022 Siliang Zeng, Mingyi Hong, Alfredo Garcia

Other approaches in the inverse reinforcement learning (IRL) literature emphasize policy estimation at the expense of reduced reward estimation accuracy.

Imitation Learning

Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees

no code implementations 4 Oct 2022 Siliang Zeng, Chenliang Li, Alfredo Garcia, Mingyi Hong

To reduce the computational burden of a nested loop, novel methods such as SQIL [1] and IQ-Learn [2] emphasize policy estimation at the expense of reward estimation accuracy.

counterfactual Imitation Learning +2

A Framework for Understanding Model Extraction Attack and Defense

no code implementations 23 Jun 2022 Xun Xian, Mingyi Hong, Jie Ding

The privacy of machine learning models has become a significant concern in many emerging Machine-Learning-as-a-Service applications, where prediction services based on well-trained models are offered to users via pay-per-query.

Adversarial Attack BIG-bench Machine Learning +1

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale

2 code implementations 13 Jun 2022 Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu

Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.

Distributed Optimization

Optimal Solutions for Joint Beamforming and Antenna Selection: From Branch and Bound to Graph Neural Imitation Learning

no code implementations 11 Jun 2022 Sagar Shrestha, Xiao Fu, Mingyi Hong

This work revisits the joint beamforming (BF) and antenna selection (AS) problem, as well as its robust beamforming (RBF) version under imperfect channel state information (CSI).

Imitation Learning

Zeroth-Order SciML: Non-intrusive Integration of Scientific Software with Deep Learning

no code implementations 4 Jun 2022 Ioannis Tsaknakis, Bhavya Kailkhura, Sijia Liu, Donald Loveland, James Diffenderfer, Anna Maria Hiszpanski, Mingyi Hong

Existing knowledge integration approaches are limited to using differentiable knowledge sources to be compatible with the first-order DL training paradigm.

Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective

no code implementations 27 Apr 2022 Xinwei Zhang, Mingyi Hong, Nicola Elia

Distributed algorithms have been playing an increasingly important role in many applications such as machine learning, signal processing, and control.

Distributed Optimization

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

1 code implementation ICLR 2022 Yimeng Zhang, Yuguang Yao, Jinghan Jia, JinFeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu

To tackle this problem, we next propose to prepend an autoencoder (AE) to a given (black-box) model so that DS can be trained using variance-reduced ZO optimization.

Adversarial Robustness Image Classification +1

To Supervise or Not: How to Effectively Learn Wireless Interference Management Models?

no code implementations 28 Dec 2021 Bingqing Song, Haoran Sun, Wenqiang Pu, Sijia Liu, Mingyi Hong

We then provide a series of theoretical results to further understand the properties of the two approaches.


Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization

2 code implementations 23 Dec 2021 Yihua Zhang, Guanhua Zhang, Prashant Khanduri, Mingyi Hong, Shiyu Chang, Sijia Liu

We first show that the commonly-used Fast-AT is equivalent to using a stochastic gradient algorithm to solve a linearized BLO problem involving a sign operation.

Adversarial Defense
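
The sign operation mentioned in the snippet above is the FGSM input perturbation at the heart of Fast-AT. Below is a minimal sketch on a toy linear model; the model, loss, and `eps` are illustrative assumptions, and the paper's contribution is the bi-level view of this step, not the step itself.

```python
import numpy as np

def fgsm_step(grad_x, eps):
    """One signed-gradient (FGSM) move: the maximizer of the linearized
    inner problem over an l_inf ball of radius eps."""
    return eps * np.sign(grad_x)

# Toy model: squared loss of a fixed linear predictor on one example.
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
loss = lambda x_: (w @ x_ - 1.0) ** 2
grad_x = 2.0 * (w @ x - 1.0) * w        # gradient of the loss w.r.t. the input
x_adv = x + fgsm_step(grad_x, eps=0.1)  # adversarial example for the outer update
```

The perturbed input stays within the eps-ball and increases the loss on this example; Fast-AT then takes an ordinary training step on `x_adv`.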

Dynamic Differential-Privacy Preserving SGD

no code implementations 30 Oct 2021 Jian Du, Song Li, Xiangyi Chen, Siheng Chen, Mingyi Hong

Keeping the same gradient clipping threshold and noise power in each step to maintain an equivalent privacy cost results in unstable updates and lower model accuracy compared with the non-DP counterpart.

Federated Learning Image Classification +1

Learning to Coordinate in Multi-Agent Systems: A Coordinated Actor-Critic Algorithm and Finite-Time Guarantees

no code implementations 11 Oct 2021 Siliang Zeng, Tianyi Chen, Alfredo Garcia, Mingyi Hong

The flexibility in our design allows the proposed MARL-CAC algorithm to be used in a fully decentralized setting, where the agents can only communicate with their neighbors, as well as a federated setting, where the agents occasionally communicate with a server while optimizing their (partially personalized) local models.

Multi-agent Reinforcement Learning

Inducing Equilibria via Incentives: Simultaneous Design-and-Play Ensures Global Convergence

no code implementations 4 Oct 2021 Boyi Liu, Jiayang Li, Zhuoran Yang, Hoi-To Wai, Mingyi Hong, Yu Marco Nie, Zhaoran Wang

To regulate a social system comprised of self-interested agents, economic incentives are often required to induce a desirable outcome.

Bilevel Optimization

Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy

no code implementations 25 Jun 2021 Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, JinFeng Yi

Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy with FL.

Federated Learning

STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning

no code implementations NeurIPS 2021 Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod K. Varshney

Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution.

Federated Learning

Deep Spectrum Cartography: Completing Radio Map Tensors Using Learned Neural Models

1 code implementation 1 May 2021 Sagar Shrestha, Xiao Fu, Mingyi Hong

However, such deep learning (DL)-based SC approaches encounter serious challenges in both off-line model learning (training) and completion (generalization), possibly because the latent state space for generating the radio maps is prohibitively large.

Spectrum Cartography

Stochastic Mirror Descent for Low-Rank Tensor Decomposition Under Non-Euclidean Losses

no code implementations 29 Apr 2021 Wenqiang Pu, Shahana Ibrahim, Xiao Fu, Mingyi Hong

This work offers a unified stochastic algorithmic framework for large-scale CPD decomposition under a variety of non-Euclidean loss functions.

Tensor Decomposition

On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adversarial Perturbations

no code implementations 25 Feb 2021 Chi Zhang, Jinghan Jia, Burhaneddin Yaman, Steen Moeller, Sijia Liu, Mingyi Hong, Mehmet Akçakaya

Although deep learning (DL) has received much attention in accelerated MRI, recent studies suggest small perturbations may lead to instabilities in DL-based reconstructions, leading to concern for their clinical application.

MRI Reconstruction

Decentralized Riemannian Gradient Descent on the Stiefel Manifold

1 code implementation 14 Feb 2021 Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour

The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph.

Distributed Optimization

On the Local Linear Rate of Consensus on the Stiefel Manifold

no code implementations 22 Jan 2021 Shixiang Chen, Alfredo Garcia, Mingyi Hong, Shahin Shahrampour

We study the convergence properties of Riemannian gradient method for solving the consensus problem (for an undirected connected graph) over the Stiefel manifold.

RMSprop can converge with proper hyper-parameter

no code implementations ICLR 2021 Naichen Shi, Dawei Li, Mingyi Hong, Ruoyu Sun

Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSProp.

Towards Understanding Asynchronous Advantage Actor-critic: Convergence and Linear Speedup

no code implementations 31 Dec 2020 Han Shen, Kaiqing Zhang, Mingyi Hong, Tianyi Chen

Asynchronous and parallel implementation of standard reinforcement learning (RL) algorithms is a key enabler of the tremendous success of modern RL.

Atari Games OpenAI Gym +1

Hybrid Federated Learning: Algorithms and Implementation

no code implementations 22 Dec 2020 Xinwei Zhang, Wotao Yin, Mingyi Hong, Tianyi Chen

To the best of our knowledge, this is the first formulation and algorithm developed for the hybrid FL.

Federated Learning

Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems

no code implementations NeurIPS 2020 Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong

To the best of our knowledge, this is the first time that first-order algorithms with polynomial per-iteration complexity and global sublinear rate are designed to find SOSPs of the important class of non-convex problems with linear constraints (almost surely).

Provably Efficient Neural GTD for Off-Policy Learning

no code implementations NeurIPS 2020 Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong

This paper studies a gradient temporal difference (GTD) algorithm using neural network (NN) function approximators to minimize the mean squared Bellman error (MSBE).

Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment

4 code implementations 16 Nov 2020 Haoran Sun, Wenqiang Pu, Minghe Zhu, Xiao Fu, Tsung-Hui Chang, Mingyi Hong

We propose to build the notion of continual learning (CL) into the modeling process of learning wireless systems, so that the learning model can incrementally adapt to the new episodes, without forgetting knowledge learned from the previous episodes.

Continual Learning Fairness

Learning to Beamform in Heterogeneous Massive MIMO Networks

no code implementations 8 Nov 2020 Minghe Zhu, Tsung-Hui Chang, Mingyi Hong

It is well-known that the problem of finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is challenging because of its non-convexity, and conventional optimization based algorithms suffer from high computational costs.

Joint Channel Assignment and Power Allocation for Multi-UAV Communication

no code implementations 19 Aug 2020 Lingyun Zhou, Xihan Chen, Mingyi Hong, Shi Jin, Qingjiang Shi

Unmanned aerial vehicle (UAV) swarms have emerged as a promising novel paradigm to achieve better coverage and higher capacity for future wireless networks by exploiting the more favorable line-of-sight (LoS) propagation.

A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic

no code implementations 10 Jul 2020 Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang

Bilevel optimization is a class of problems which exhibit a two-level structure, and its goal is to minimize an outer objective function with variables which are constrained to be the optimal solution to an (inner) optimization problem.

Bilevel Optimization Hyperparameter Optimization

Understanding Gradient Clipping in Private SGD: A Geometric Perspective

no code implementations NeurIPS 2020 Xiangyi Chen, Zhiwei Steven Wu, Mingyi Hong

Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.

Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds

no code implementations 24 Jun 2020 Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Arindam Banerjee

We obtain this rate by providing the first analyses on a collection of private gradient-based methods, including adaptive algorithms DP RMSProp and DP Adam.

Generalization Bounds

On the Divergence of Decentralized Non-Convex Optimization

no code implementations 20 Jun 2020 Mingyi Hong, Siliang Zeng, Junyu Zhang, Haoran Sun

However, by constructing some counter-examples, we show that when certain local Lipschitz conditions (LLC) on the local function gradients $\nabla f_i$ are not satisfied, most of the existing decentralized algorithms diverge, even if the global Lipschitz condition (GLC) is satisfied, i.e., the sum function $f$ has a Lipschitz gradient.

Open-Ended Question Answering

Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances

no code implementations 15 Jun 2020 Meisam Razaviyayn, Tianjian Huang, Songtao Lu, Maher Nouiehed, Maziar Sanjabi, Mingyi Hong

The min-max optimization problem, also known as the saddle point problem, is a classical optimization problem which is also studied in the context of zero-sum games.

FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data

1 code implementation 22 May 2020 Xinwei Zhang, Mingyi Hong, Sairaj Dhople, Wotao Yin, Yang Liu

Aiming at designing FL algorithms that are provably fast and require as few assumptions as possible, we propose a new algorithm design strategy from the primal-dual optimization perspective.

Federated Learning

Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond

no code implementations 14 Jan 2020 Tsung-Hui Chang, Mingyi Hong, Hoi-To Wai, Xinwei Zhang, Songtao Lu

In particular, we provide a selective review of the recent techniques developed for optimizing non-convex models (i.e., problem classes), processing batch and streaming data (i.e., data types), over the networks in a distributed manner (i.e., communication and computation paradigm).

A Communication Efficient Collaborative Learning Framework for Distributed Features

no code implementations 24 Dec 2019 Yang Liu, Yan Kang, Xinwei Zhang, Liping Li, Yong Cheng, Tianjian Chen, Mingyi Hong, Qiang Yang

We introduce a collaborative learning framework allowing multiple parties having different sets of attributes about the same user to jointly build models without exposing their raw data or model parameters.

Dense Recurrent Neural Networks for Accelerated MRI: History-Cognizant Unrolling of Optimization Algorithms

no code implementations 16 Dec 2019 Seyed Amir Hossein Hosseini, Burhaneddin Yaman, Steen Moeller, Mingyi Hong, Mehmet Akçakaya

These methods unroll iterative optimization algorithms to solve the inverse problem objective function, by alternating between domain-specific data consistency and data-driven regularization via neural networks.

MRI Reconstruction Rolling Shutter Correction

Variance Reduced Policy Evaluation with Smooth Function Approximation

no code implementations NeurIPS 2019 Hoi-To Wai, Mingyi Hong, Zhuoran Yang, Zhaoran Wang, Kexin Tang

Policy evaluation with smooth and nonlinear function approximation has shown great potential for reinforcement learning.

Spectrum Cartography via Coupled Block-Term Tensor Decomposition

no code implementations 28 Nov 2019 Guoyong Zhang, Xiao Fu, Jun Wang, Xi-Le Zhao, Mingyi Hong

Spectrum cartography aims at estimating power propagation patterns over a geographical region across multiple frequency bands (i.e., a radio map), from limited samples taken sparsely over the region.

Spectrum Cartography Tensor Decomposition

ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

1 code implementation NeurIPS 2019 Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox

In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm, that generalizes AdaMM to the gradient-free regime.

Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach

no code implementations 13 Oct 2019 Haoran Sun, Songtao Lu, Mingyi Hong

Similarly, for online problems, the proposed method achieves an $\mathcal{O}(m \epsilon^{-3/2})$ sample complexity and an $\mathcal{O}(\epsilon^{-1})$ communication complexity, while the best existing bounds are $\mathcal{O}(m\epsilon^{-2})$ and $\mathcal{O}(\epsilon^{-2})$, respectively.

Stochastic Optimization
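
A minimal gradient-tracking sketch on a toy decentralized quadratic may make the idea concrete. The complete-graph mixing matrix, step size, and local functions below are illustrative assumptions; the paper's method additionally combines tracking with stochastic gradient estimation.

```python
import numpy as np

# Each of m agents holds f_i(x) = 0.5 * (x - b_i)^2; the network minimizes
# their average, whose unique solution is mean(b).
m = 4
b = np.array([1.0, 2.0, 3.0, 4.0])
W = np.full((m, m), 1.0 / m)        # doubly stochastic mixing matrix (complete graph)
grad = lambda x: x - b              # stacked local gradients, one entry per agent

x = np.zeros(m)                     # each agent's local copy of the variable
y = grad(x)                         # gradient trackers, initialized to local gradients
lr = 0.3
for _ in range(100):
    x_new = W @ x - lr * y          # mix with neighbors, then descend along tracker
    y = W @ y + grad(x_new) - grad(x)   # tracking update keeps mean(y) = mean(grad(x))
    x = x_new
```

Every local copy converges to the global minimizer mean(b) = 2.5, even though no agent ever sees the other agents' objectives directly.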

SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems

no code implementations 9 Jul 2019 Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong

This paper proposes low-complexity algorithms for finding approximate second-order stationary points (SOSPs) of problems with smooth non-convex objective and linear constraints.

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

1 code implementation 10 Jun 2019 Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin

Graph neural networks (GNNs) which apply the deep neural networks to graph data have achieved significant performance for the task of semi-supervised node classification.

Adversarial Robustness Classification +2

Learned Conjugate Gradient Descent Network for Massive MIMO Detection

1 code implementation 10 Jun 2019 Yi Wei, Ming-Min Zhao, Mingyi Hong, Min-Jian Zhao, Ming Lei

Furthermore, in order to reduce the memory costs, a novel quantized LcgNet is proposed, where a low-resolution nonuniform quantizer is integrated into the LcgNet to smartly quantize the aforementioned step-sizes.

Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms

no code implementations NeurIPS 2020 Xiangyi Chen, Tiancong Chen, Haoran Sun, Zhiwei Steven Wu, Mingyi Hong

We show that these algorithms are non-convergent whenever there is some disparity between the expected median and mean over the local gradients.

Federated Learning

signSGD via Zeroth-Order Oracle

no code implementations ICLR 2019 Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong

Our study shows that ZO signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.

Image Classification Stochastic Optimization
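
To make the iteration concrete, here is a hedged sketch of a ZO signSGD step under stated assumptions: a two-point Gaussian-direction gradient estimator, O(1/sqrt(t)) step sizes, and a toy objective, none of which are taken from the paper's experiments.

```python
import numpy as np

def zo_sign_sgd_step(f, x, lr, mu=1e-4, n_dirs=30, rng=None):
    """One ZO signSGD step: estimate the gradient from function values via
    random-direction finite differences, then move along its sign."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return x - lr * np.sign(g / n_dirs)

f = lambda x: np.sum((x - 1.0) ** 2)     # toy smooth objective, minimizer at all-ones
x = np.zeros(4)
rng = np.random.default_rng(1)
for t in range(1, 201):
    x = zo_sign_sgd_step(f, x, lr=0.5 / np.sqrt(t), rng=rng)  # decaying step sizes
```

The sign step discards gradient magnitudes, so the iterates settle into a small neighborhood of the minimizer whose size shrinks with the step size, consistent with the $O(\sqrt{d}/\sqrt{T})$-type rates quoted above.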

Understand the dynamics of GANs via Primal-Dual Optimization

no code implementations ICLR 2019 Songtao Lu, Rahul Singh, Xiangyi Chen, Yongxin Chen, Mingyi Hong

By developing new primal-dual optimization tools, we show that, with a proper stepsize choice, the widely used first-order iterative algorithm in training GANs would in fact converge to a stationary solution with a sublinear rate.

Multi-Task Learning

Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications

no code implementations 21 Feb 2019 Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, Yongxin Chen

In this work, we consider a block-wise one-sided non-convex min-max problem, in which the minimization problem consists of multiple blocks and is non-convex, while the maximization problem is (strongly) concave.

On the Global Convergence of Imitation Learning: A Case for Linear Quadratic Regulator

no code implementations 11 Jan 2019 Qi Cai, Mingyi Hong, Yongxin Chen, Zhaoran Wang

We study the global convergence of generative adversarial imitation learning for linear quadratic regulators, which is posed as minimax optimization.

Imitation Learning reinforcement-learning +1

On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization

no code implementations ICLR 2019 Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong

We prove that under our derived conditions, these methods can achieve the convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization.

Open-Ended Question Answering Stochastic Optimization
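
For reference, a generic Adam-type update of the kind analyzed in this line of work looks as follows; the hyper-parameters and the toy nonconvex objective are illustrative assumptions, not the paper's conditions.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Generic Adam-type update: exponential moving averages of the gradient
    (m) and its square (v), bias-corrected, giving an adaptive step size."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Nonconvex scalar objective f(w) = w^2 + 3 sin^2(w), stationary at w = 0.
df = lambda w: 2 * w + 3 * np.sin(2 * w)   # its gradient
w, m, v = 2.0, 0.0, 0.0
for t in range(1, 501):
    w, m, v = adam_step(w, df(w), m, v, t)
```

On this deterministic toy problem the iterates approach the stationary point w = 0; the paper's contribution is the set of conditions under which such Adam-type methods provably converge in the stochastic nonconvex setting.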

Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks

no code implementations ICML 2018 Mingyi Hong, Meisam Razaviyayn, Jason Lee

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems.

Distributed Optimization

Multi-Agent Reinforcement Learning via Double Averaging Primal-Dual Optimization

no code implementations NeurIPS 2018 Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong

Despite the success of single-agent reinforcement learning, multi-agent reinforcement learning (MARL) remains challenging due to complex interactions between agents.

Multi-agent Reinforcement Learning reinforcement-learning +1

Structured SUMCOR Multiview Canonical Correlation Analysis for Large-Scale Data

no code implementations 24 Apr 2018 Charilaos I. Kanatsoulis, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong

In this work, we propose a new computational framework for large-scale SUMCOR GCCA that can easily incorporate a suite of structural regularizers which are frequently used in data analytics.

On the Sublinear Convergence of Randomly Perturbed Alternating Gradient Descent to Second Order Stationary Solutions

no code implementations 28 Feb 2018 Songtao Lu, Mingyi Hong, Zhengdao Wang

The alternating gradient descent (AGD) is a simple but popular algorithm which has been applied to problems in optimization, machine learning, data mining, and signal processing.

Zeroth Order Nonconvex Multi-Agent Optimization over Networks

no code implementations 27 Oct 2017 Davood Hajinezhad, Mingyi Hong, Alfredo Garcia

In this paper, we consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function, and it is allowed to exchange messages with its immediate neighbors.

Distributed Optimization

A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality

no code implementations 24 Mar 2017 Songtao Lu, Mingyi Hong, Zhengdao Wang

The proposed algorithm is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points of the nonconvex SymNMF problem.

Clustering Community Detection +2

Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering

10 code implementations ICML 2017 Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong

To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN).

Clustering Dimensionality Reduction

On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization

no code implementations 10 Jul 2016 Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Mingyi Hong

In particular, we first show that for a family of quadratic minimization problems, the iteration complexity $\mathcal{O}(\log^2(p)\cdot\log(1/\epsilon))$ of the CBCD-type methods matches that of the GD methods in terms of dependency on $p$, up to a $\log^2 p$ factor.
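
To make the setup concrete, here is a hedged sketch of a CBCD-type method, cyclic exact block minimization (block Gauss-Seidel) on a strongly convex quadratic; the block partition, matrix, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x with SPD A; cycle through coordinate blocks,
# exactly minimizing f over each block with the others held fixed.
rng = np.random.default_rng(0)
p = 6
M = rng.standard_normal((p, p))
A = M @ M.T + p * np.eye(p)                  # well-conditioned SPD matrix
b = rng.standard_normal(p)
x_star = np.linalg.solve(A, b)               # exact minimizer, for reference

x = np.zeros(p)
blocks = [np.arange(0, 3), np.arange(3, 6)]  # two coordinate blocks
for _ in range(200):                         # one cycle = exact solve per block
    for blk in blocks:
        rest = np.setdiff1d(np.arange(p), blk)
        rhs = b[blk] - A[np.ix_(blk, rest)] @ x[rest]
        x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)
```

For strongly convex quadratics these cycles converge linearly to the minimizer, which is the regime in which the iteration-complexity comparison above is stated.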


Scalable and Flexible Multiview MAX-VAR Canonical Correlation Analysis

no code implementations 31 May 2016 Xiao Fu, Kejun Huang, Mingyi Hong, Nicholas D. Sidiropoulos, Anthony Man-Cho So

Generalized canonical correlation analysis (GCCA) aims at finding latent low-dimensional common structure from multiple views (feature vectors in different domains) of the same entities.

NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization

no code implementations NeurIPS 2016 Davood Hajinezhad, Mingyi Hong, Tuo Zhao, Zhaoran Wang

We study a stochastic and distributed algorithm for nonconvex problems whose objective consists of a sum of $N$ nonconvex $L_i/N$-smooth functions, plus a nonsmooth regularizer.

Stochastic Optimization

On Fast Convergence of Proximal Algorithms for SQRT-Lasso Optimization: Don't Worry About Its Nonsmooth Loss Function

no code implementations 25 May 2016 Xingguo Li, Haoming Jiang, Jarvis Haupt, Raman Arora, Han Liu, Mingyi Hong, Tuo Zhao

Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility.


Stochastic Proximal Gradient Consensus Over Random Networks

no code implementations 28 Nov 2015 Mingyi Hong, Tsung-Hui Chang

We consider solving a convex, possibly stochastic optimization problem over a randomly time-varying multi-agent network.

Optimization and Control Information Theory

Asynchronous Distributed ADMM for Large-Scale Optimization - Part II: Linear Convergence Analysis and Numerical Performance

no code implementations 9 Sep 2015 Tsung-Hui Chang, Wei-Cheng Liao, Mingyi Hong, Xiangfeng Wang

Unfortunately, a direct synchronous implementation of such algorithm does not scale well with the problem size, as the algorithm speed is limited by the slowest computing nodes.


Asynchronous Distributed ADMM for Large-Scale Optimization - Part I: Algorithm and Convergence Analysis

no code implementations 9 Sep 2015 Tsung-Hui Chang, Mingyi Hong, Wei-Cheng Liao, Xiangfeng Wang

By formulating the learning problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology.

Distributed Optimization
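
The star-topology consensus formulation described above can be sketched in a few lines of consensus ADMM; the scalar local quadratics, penalty `rho`, and closed-form updates are illustrative assumptions, not the paper's general learning problem.

```python
import numpy as np

# N workers each hold f_i(x) = 0.5 * (x - a_i)^2; the star-topology server
# enforces consensus x_i = z via ADMM in scaled dual form.
N, rho, T = 5, 1.0, 100
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # local data; consensus optimum is mean(a)
x = np.zeros(N)                          # worker variables (updated in parallel)
z = 0.0                                  # server's consensus variable
u = np.zeros(N)                          # scaled dual variables
for _ in range(T):
    x = (a + rho * (z - u)) / (1 + rho)  # closed-form local updates, fully parallel
    z = np.mean(x + u)                   # server averages, then broadcasts
    u = u + x - z                        # dual ascent on the constraint x_i = z
```

The x-updates touch only local data, which is what makes the scheme fully parallel over the workers; all variables converge to the consensus optimum mean(a) = 2.0.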

Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization

1 code implementation NeurIPS 2014 Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo, Jong-Shi Pang

In this work, we propose an inexact parallel BCD approach where at each iteration, a subset of the variables is updated in parallel by minimizing convex approximations of the original objective function.

Optimization and Control

Alternating direction method of multipliers for penalized zero-variance discriminant analysis

no code implementations 21 Jan 2014 Brendan P. W. Ames, Mingyi Hong

To accomplish this task, we propose a heuristic, called sparse zero-variance discriminant analysis (SZVD), for simultaneously performing linear discriminant analysis and feature selection on high-dimensional data.

feature selection General Classification +3

A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization

no code implementations 11 Sep 2012 Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo

The block coordinate descent (BCD) method is widely used for minimizing a continuous function f of several block variables.

Optimization and Control
