Search Results for author: Zhi-Quan Luo

Found 52 papers, 17 papers with code

A Unified Convergence Analysis of Block Successive Minimization Methods for Nonsmooth Optimization

no code implementations 11 Sep 2012 Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo

The block coordinate descent (BCD) method is widely used for minimizing a continuous function f of several block variables.

Optimization and Control
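For readers new to BCD, a minimal sketch of the idea: cycle through the blocks, minimizing over one block while the others are held fixed. The toy objective and its closed-form block updates below are purely illustrative and not taken from the paper.

```python
# Block coordinate descent on f(x1, x2) = (x1 - 1)^2 + (x2 - 2)^2 + x1*x2:
# each step exactly minimizes f over one block with the other block fixed.

def f(x1, x2):
    return (x1 - 1.0) ** 2 + (x2 - 2.0) ** 2 + x1 * x2

def bcd(iters=50):
    x1, x2 = 0.0, 0.0
    for _ in range(iters):
        x1 = 1.0 - x2 / 2.0   # argmin over block x1 (x2 fixed)
        x2 = 2.0 - x1 / 2.0   # argmin over block x2 (x1 fixed)
    return x1, x2

x1, x2 = bcd()   # converges to the unique minimizer (0, 2) of this convex f
```

The paper's point is that many such schemes (exact, proximal, or approximate block updates) admit a unified convergence analysis via successive minimization of upper bounds.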

On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization

no code implementations NeurIPS 2013 Ke Hou, Zirui Zhou, Anthony Man-Cho So, Zhi-Quan Luo

Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately.


Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization

1 code implementation NeurIPS 2014 Meisam Razaviyayn, Mingyi Hong, Zhi-Quan Luo, Jong-Shi Pang

In this work, we propose an inexact parallel BCD approach where at each iteration, a subset of the variables is updated in parallel by minimizing convex approximations of the original objective function.

Optimization and Control

Guaranteed Matrix Completion via Non-convex Factorization

no code implementations 28 Nov 2014 Ruoyu Sun, Zhi-Quan Luo

In this paper, we establish a theoretical guarantee for the factorization formulation to correctly recover the underlying low-rank matrix.

Matrix Completion

Parallel Direction Method of Multipliers

no code implementations NeurIPS 2014 Huahua Wang, Arindam Banerjee, Zhi-Quan Luo

In this paper, we propose a parallel randomized block coordinate method named Parallel Direction Method of Multipliers (PDMM) to solve the optimization problems with multi-block linear constraints.

Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications

no code implementations 11 Oct 2018 Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-Quan Luo, Zhouchen Lin

The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven effective priors for many applications such as background modeling, photometric stereo and image alignment.

Moving Object Detection, Object Detection

Inexact Block Coordinate Descent Algorithms for Nonsmooth Nonconvex Optimization

1 code implementation 10 May 2019 Yang Yang, Marius Pesavento, Zhi-Quan Luo, Björn Ottersten

Interestingly, when the approximation subproblem is solved by a descent algorithm, convergence of a subsequence to a stationary point is still guaranteed even if the approximation subproblem is solved inexactly by terminating the descent algorithm after a finite number of iterations.

Anomaly Detection, Retrieval

Optimally Combining Classifiers for Semi-Supervised Learning

1 code implementation 7 Jun 2020 Zhiguo Wang, Liusha Yang, Feng Yin, Ke Lin, Qingjiang Shi, Zhi-Quan Luo

In this paper, we find that these two methods have complementary properties and greater diversity, which motivates us to propose a new semi-supervised learning method that adaptively combines the strengths of XGBoost and the transductive support vector machine.

Improved RIP-Based Bounds for Guaranteed Performance of two Compressed Sensing Algorithms

no code implementations 3 Jul 2020 Yun-Bin Zhao, Zhi-Quan Luo

The purpose of this paper is to affirmatively answer this question and rigorously show that the RIP-based bound for guaranteed performance of IHT can be significantly improved to $\delta_{3k} < (\sqrt{5}-1)/2 \approx 0.618$, and the bound for CoSaMP can be improved and pushed to $\delta_{4k} < 0.5102$.
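As background, iterative hard thresholding (IHT) alternates a gradient step on the least-squares residual with hard thresholding onto $k$-sparse vectors; the RIP constants $\delta_{3k}$, $\delta_{4k}$ govern when this recovers the true signal. A pure-Python toy sketch (the matrix, sparsity level, and signal are illustrative, not from the paper):

```python
# IHT: x_{t+1} = H_k(x_t + A^T (y - A x_t)), where H_k keeps the k
# largest-magnitude entries and zeroes the rest.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def rmatvec(A, r):  # A^T r
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(A[0]))]

def hard_threshold(x, k):
    keep = set(sorted(range(len(x)), key=lambda j: abs(x[j]), reverse=True)[:k])
    return [x[j] if j in keep else 0.0 for j in range(len(x))]

def iht(A, y, k, iters=20):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]   # residual y - A x
        step = [xj + gj for xj, gj in zip(x, rmatvec(A, r))]
        x = hard_threshold(step, k)
    return x

A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
x_true = [2.0, 0.0, 0.0]        # 1-sparse ground truth
y = matvec(A, x_true)           # underdetermined measurements y = A x_true
x_hat = iht(A, y, k=1)
```

On this well-conditioned toy instance IHT recovers the support in one step; the paper's contribution is the sharpest known RIP conditions under which such recovery is guaranteed in general.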


Pushing The Limit of Type I Codebook For FDD Massive MIMO Beamforming: A Channel Covariance Reconstruction Approach

no code implementations 22 Oct 2020 Kai Li, Ying Li, Lei Cheng, Qingjiang Shi, Zhi-Quan Luo

There is a fundamental trade-off between the channel representation resolution of codebooks and the overheads of feedback communications in the fifth generation new radio (5G NR) frequency division duplex (FDD) massive multiple-input and multiple-output (MIMO) systems.


A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for Nonconvex-Concave Min-Max Problems

no code implementations NeurIPS 2020 Jiawei Zhang, Peijun Xiao, Ruoyu Sun, Zhi-Quan Luo

We prove that the stabilized GDA algorithm can achieve an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions.
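The smoothing idea can be sketched on a toy saddle problem: damp the x-update with a proximal term toward an auxiliary sequence z that slowly tracks x. The objective, step sizes, and proximal weight below are illustrative (and convex-concave for simplicity, whereas the paper treats the harder nonconvex-concave case).

```python
# Smoothed GDA on f(x, y) = (x - 1)^2 + x*y - y^2 (concave in y).
# The x-step descends on f plus the proximal term p/2 * (x - z)^2.

def smoothed_gda(steps=5000, c=0.1, p=1.0, beta=0.5):
    x, y, z = 0.0, 0.0, 0.0
    for _ in range(steps):
        gx = 2 * (x - 1) + y + p * (x - z)   # grad_x of smoothed objective
        gy = x - 2 * y                        # grad_y of f
        x -= c * gx                           # descent in x
        y += c * gy                           # ascent in y
        z += beta * (x - z)                   # smoothing/averaging step
    return x, y

x, y = smoothed_gda()   # approaches the saddle point (0.8, 0.4)
```

Plain GDA can cycle on min-max problems; the extra proximal/averaging sequence is what stabilizes the single-loop iteration.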

Distributed Stochastic Consensus Optimization with Momentum for Nonconvex Nonsmooth Problems

no code implementations 10 Nov 2020 Zhiguo Wang, Jiawei Zhang, Tsung-Hui Chang, Jian Li, Zhi-Quan Luo

While many distributed optimization algorithms have been proposed for solving smooth or convex problems over the networks, few of them can handle non-convex and non-smooth problems.

Distributed Optimization

Disentangling Adversarial Robustness in Directions of the Data Manifold

1 code implementation 1 Jan 2021 Jiancong Xiao, Liusha Yang, Zhi-Quan Luo

Standard adversarial training increases model robustness by extending the data manifold boundary in the small variance directions, while on the contrary, adversarial training with generative adversarial examples increases model robustness by extending the data manifold boundary in the large variance directions.

Adversarial Robustness
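For readers unfamiliar with adversarial examples, a minimal FGSM-style perturbation on a linear model (weights, input, and epsilon are hypothetical). This only illustrates what an adversarial example is; the paper's question of *where* such perturbations point relative to the data manifold is not captured by this sketch.

```python
# One-step sign-gradient attack: move the input a small step eps in the
# direction that most increases the loss, coordinate-wise.

def loss(w, x, y):                  # squared loss on the linear score
    s = sum(wi * xi for wi, xi in zip(w, x))
    return (s - y) ** 2

def fgsm(w, x, y, eps):
    s = sum(wi * xi for wi, xi in zip(w, x))
    grad_x = [2 * (s - y) * wi for wi in w]         # d loss / d x
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

w = [0.5, -1.0, 0.25]     # hypothetical trained weights
x = [1.0, 0.2, -0.4]      # clean input
y = 1.0                   # target
x_adv = fgsm(w, x, y, eps=0.1)
```

Adversarial training replaces clean inputs with such perturbed ones during training, which is what extends the decision boundary in particular data-manifold directions.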

An efficient linear programming rounding-and-refinement algorithm for large-scale network slicing problem

no code implementations 4 Feb 2021 Wei-Kun Chen, Ya-Feng Liu, Yu-Hong Dai, Zhi-Quan Luo

In this paper, we consider the network slicing problem, which attempts to map multiple customized virtual network requests (also called services) onto a common shared network infrastructure and allocate network resources to meet diverse service requirements, and we propose an efficient two-stage algorithm for solving this NP-hard problem.

Networking and Internet Architecture, Information Theory, Signal Processing, Optimization and Control

Resource Reservation in Backhaul and Radio Access Network with Uncertain User Demands

no code implementations 23 Feb 2021 Navid Reyhanian, Hamid Farmanbar, Zhi-Quan Luo

In this paper, we consider the problem of joint resource reservation in the backhaul and Radio Access Network (RAN) based on the statistics of user demands and channel states, and also network availability.

Decentralized Non-Convex Learning with Linearly Coupled Constraints

no code implementations 9 Mar 2021 Jiawei Zhang, Songyang Ge, Tsung-Hui Chang, Zhi-Quan Luo

Motivated by the need for decentralized learning, this paper aims at designing a distributed algorithm for solving nonconvex problems with general linear constraints over a multi-agent network.

Optimization and Control, Systems and Control

Data-Driven Adaptive Network Slicing for Multi-Tenant Networks

no code implementations 7 Jun 2021 Navid Reyhanian, Zhi-Quan Luo

We propose a Frank-Wolfe algorithm to iteratively solve approximated problems over long time-scales.
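As context, a Frank-Wolfe iteration only needs a linear minimization oracle over the feasible set, which keeps iterates feasible without projections. A toy sketch on the probability simplex (objective and step rule are standard textbook choices, unrelated to the paper's slicing problems):

```python
# Frank-Wolfe for min ||x - b||^2 over the probability simplex.
# The linear minimization oracle over the simplex is just the vertex e_i
# with the smallest gradient coordinate.

def grad(x, b):
    return [2 * (xi - bi) for xi, bi in zip(x, b)]

def frank_wolfe(b, iters=2000):
    n = len(b)
    x = [1.0 / n] * n                            # start at the barycenter
    for t in range(iters):
        g = grad(x, b)
        i = min(range(n), key=lambda j: g[j])    # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)                  # standard step-size rule
        x = [(1 - gamma) * xj for xj in x]       # move toward e_i
        x[i] += gamma
    return x

b = [0.2, 0.5, 0.3]
x = frank_wolfe(b)   # converges to b, the projection of b onto the simplex
```

Iterates remain convex combinations of simplex vertices, so feasibility is automatic, a key attraction for resource-allocation constraints.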

On Generalization of Adversarial Imitation Learning and Beyond

no code implementations 19 Jun 2021 Tian Xu, Ziniu Li, Yang Yu, Zhi-Quan Luo

For some MDPs, we show that vanilla AIL has a worse sample complexity than BC.

Imitation Learning

Efficient Estimation of Sensor Biases for the 3-Dimensional Asynchronous Multi-Sensor System

no code implementations 4 Sep 2021 Wenqiang Pu, Ya-Feng Liu, Zhi-Quan Luo

There are generally two difficulties in this bias estimation problem: one is the unknown target states which serve as the nuisance variables in the estimation problem, and the other is the highly nonlinear coordinate transformation between the local and global coordinate systems of the sensors.

HyperDQN: A Randomized Exploration Method for Deep Reinforcement Learning

1 code implementation ICLR 2022 Ziniu Li, Yingru Li, Yushun Zhang, Tong Zhang, Zhi-Quan Luo

However, it is limited to the case where 1) a good feature is known in advance and 2) this feature is fixed during training; otherwise, RLSVI suffers an unbearable computational burden to obtain the posterior samples of the parameter in the $Q$-value function.

Efficient Exploration, Reinforcement Learning +1

Fast Generic Interaction Detection for Model Interpretability and Compression

no code implementations ICLR 2022 Tianjian Zhang, Feng Yin, Zhi-Quan Luo

The ability of discovering feature interactions in a black-box model is vital to explainable deep learning.

Rethinking ValueDice: Does It Really Improve Performance?

no code implementations 5 Feb 2022 Ziniu Li, Tian Xu, Yang Yu, Zhi-Quan Luo

First, we show that ValueDice could reduce to BC under the offline setting.

Imitation Learning
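Here BC is behavior cloning: fit a policy to the expert's state-action pairs by supervised learning. A toy tabular sketch of what "reducing to BC" means operationally (state and action names are hypothetical):

```python
# Tabular behavior cloning: in each state, imitate the most frequent
# expert action observed in the demonstrations.

from collections import Counter, defaultdict

def behavior_cloning(demos):
    by_state = defaultdict(Counter)
    for state, action in demos:
        by_state[state][action] += 1
    # deterministic policy: majority-vote action per state
    return {s: c.most_common(1)[0][0] for s, c in by_state.items()}

demos = [("s0", "left"), ("s0", "left"), ("s0", "right"), ("s1", "up")]
policy = behavior_cloning(demos)
```

The paper's observation is that, despite its adversarial formulation, ValueDice can collapse to exactly this kind of supervised imitation in the offline setting.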

Downlink Channel Covariance Matrix Reconstruction for FDD Massive MIMO Systems with Limited Feedback

no code implementations 2 Apr 2022 Kai Li, Ying Li, Lei Cheng, Qingjiang Shi, Zhi-Quan Luo

The downlink channel covariance matrix (CCM) acquisition is the key step for the practical performance of massive multiple-input and multiple-output (MIMO) systems, including beamforming, channel tracking, and user scheduling.

Scheduling

Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation

1 code implementation CVPR 2022 Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In this paper, we propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance that is competitive to ANNs yet with low latency.

Rethinking WMMSE: Can Its Complexity Scale Linearly With the Number of BS Antennas?

1 code implementation 12 May 2022 Xiaotong Zhao, Siyuan Lu, Qingjiang Shi, Zhi-Quan Luo

Precoding design for maximizing weighted sum-rate (WSR) is a fundamental problem for downlink of massive multi-user multiple-input multiple-output (MU-MIMO) systems.

Efficient-Adam: Communication-Efficient Distributed Adam

no code implementations 28 May 2022 Congliang Chen, Li Shen, Wei Liu, Zhi-Quan Luo

Distributed adaptive stochastic gradient methods have been widely used for large-scale nonconvex optimization, such as training deep learning models.

Quantization

Robust Adaptive Beamforming via Worst-Case SINR Maximization with Nonconvex Uncertainty Sets

no code implementations 13 Jun 2022 Yongwei Huang, Hao Fu, Sergiy A. Vorobyov, Zhi-Quan Luo

Then a linear matrix inequality (LMI) relaxation for the QMI problem is proposed, with an additional valid linear constraint.


Adam Can Converge Without Any Modification On Update Rules

no code implementations 20 Aug 2022 Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhi-Quan Luo

We point out there is a mismatch between the settings of theory and practice: Reddi et al. (2018) pick the problem after picking the hyperparameters of Adam, i.e., $(\beta_1, \beta_2)$; whereas practical applications often fix the problem first and then tune $(\beta_1, \beta_2)$.
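For reference, $(\beta_1, \beta_2)$ are the exponential-moving-average coefficients in Adam's update; a scalar sketch shows where they enter (toy objective and hyperparameters chosen for illustration only):

```python
# Adam on f(x) = (x - 3)^2: beta1 smooths the gradient (first moment m),
# beta2 smooths its square (second moment v); both get bias correction.

def adam(lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    x, m, v = 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = 2 * (x - 3.0)                     # gradient of the toy objective
        m = beta1 * m + (1 - beta1) * g       # first-moment EMA
        v = beta2 * v + (1 - beta2) * g * g   # second-moment EMA
        m_hat = m / (1 - beta1 ** t)          # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

x = adam()   # approaches the minimizer x = 3
```

The paper's convergence question is precisely how such behavior depends on whether $(\beta_1, \beta_2)$ are tuned for a fixed problem or the problem is chosen adversarially after the hyperparameters.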

Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

1 code implementation 2 Oct 2022 Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo

Therefore, adversarial training for multiple perturbations (ATMP) is proposed to generalize the adversarial robustness over different perturbation types (in $\ell_1$, $\ell_2$, and $\ell_\infty$ norm-bounded perturbations).

Adversarial Robustness

Understanding Adversarial Robustness Against On-manifold Adversarial Examples

1 code implementation 2 Oct 2022 Jiancong Xiao, Liusha Yang, Yanbo Fan, Jue Wang, Zhi-Quan Luo

On synthetic datasets, we theoretically prove that on-manifold adversarial examples are powerful, yet adversarial training focuses on off-manifold directions and ignores the on-manifold adversarial examples.

Adversarial Robustness

Stability Analysis and Generalization Bounds of Adversarial Training

1 code implementation 3 Oct 2022 Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhi-Quan Luo

In adversarial machine learning, deep neural networks can fit the adversarial examples on the training dataset but have poor generalization ability on the test set.

Generalization Bounds

When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work

no code implementations NeurIPS 2021 Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo

Third, we consider a constrained optimization formulation where the feasible region is the nice local region, and prove that every KKT point is a nearly global minimizer.

Bridging Distributional and Risk-sensitive Reinforcement Learning with Provable Regret Bounds

no code implementations 25 Oct 2022 Hao Liang, Zhi-Quan Luo

We study the regret guarantee for risk-sensitive reinforcement learning (RSRL) via distributional reinforcement learning (DRL) methods.

Computational Efficiency, Distributional Reinforcement Learning +2

Adversarial Rademacher Complexity of Deep Neural Networks

1 code implementation 27 Nov 2022 Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhi-Quan Luo

Specifically, we provide the first bound of adversarial Rademacher complexity of deep neural networks.

A Data Quality Assessment Framework for AI-enabled Wireless Communication

no code implementations 13 Dec 2022 Hanning Tang, Liusha Yang, Rui Zhou, Jing Liang, Hong Wei, Xuan Wang, Qingjiang Shi, Zhi-Quan Luo

Using artificial intelligence (AI) to redesign and enhance the current wireless communication system is a promising pathway toward the future sixth-generation (6G) wireless network.

Theoretical Analysis of Offline Imitation With Supplementary Dataset

1 code implementation 27 Jan 2023 Ziniu Li, Tian Xu, Yang Yu, Zhi-Quan Luo

This paper considers a situation where, besides the small amount of expert data, a supplementary dataset is available, which can be collected cheaply from sub-optimal policies.

Imitation Learning

Invariant Layers for Graphs with Nodes of Different Types

no code implementations 27 Feb 2023 Dmitry Rybin, Ruoyu Sun, Zhi-Quan Luo

We further narrow the invariant network design space by addressing a question about the sizes of tensor layers necessary for function approximation on graph data.

Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks

1 code implementation ICCV 2023 Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

In particular, our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.

A Physics-based and Data-driven Approach for Localized Statistical Channel Modeling

no code implementations 4 Mar 2023 Shutao Zhang, Xinzhi Ning, Xi Zheng, Qingjiang Shi, Tsung-Hui Chang, Zhi-Quan Luo

Localized channel modeling is crucial for offline performance optimization of 5G cellular networks, but the existing channel models are for general scenarios and do not capture local geographical structures.

Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz Dynamic Risk Measures

no code implementations 4 Jun 2023 Hao Liang, Zhi-Quan Luo

We study finite episodic Markov decision processes incorporating dynamic risk measures to capture risk sensitivity.

reinforcement-learning

Provably Efficient Adversarial Imitation Learning with Unknown Transitions

1 code implementation 11 Jun 2023 Tian Xu, Ziniu Li, Yang Yu, Zhi-Quan Luo

Adversarial imitation learning (AIL), a subset of IL methods, is particularly promising, but its theoretical foundation in the presence of unknown transitions has yet to be fully developed.

Imitation Learning

A Distribution Optimization Framework for Confidence Bounds of Risk Measures

no code implementations 12 Jun 2023 Hao Liang, Zhi-Quan Luo

Unlike traditional approaches that add or subtract a confidence radius from the empirical risk measures, our proposed schemes evaluate a specific transformation of the empirical distribution based on the distance.

Decision Making
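As an example of the kind of empirical risk measure involved, here is a conditional value-at-risk (CVaR) estimate from samples. The cost convention and level below are illustrative; the paper's transformation-based confidence bounds are more refined than simply adding or subtracting a radius from such an estimate.

```python
import math

def empirical_cvar(losses, alpha):
    """Average of the worst alpha-fraction of sampled losses (cost convention)."""
    m = max(1, math.ceil(alpha * len(losses)))
    worst = sorted(losses, reverse=True)[:m]
    return sum(worst) / m

# At alpha = 0.5 with losses [1, 2, 3, 4], the worst half is [4, 3].
cvar = empirical_cvar([1.0, 2.0, 3.0, 4.0], alpha=0.5)   # 3.5
```

At alpha = 1 the measure reduces to the plain empirical mean, recovering risk-neutral evaluation as a special case.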

HyperAgent: A Simple, Scalable, Efficient and Provable Reinforcement Learning Framework for Complex Environments

no code implementations 5 Feb 2024 Yingru Li, Jiawei Xu, Lei Han, Zhi-Quan Luo

To solve complex tasks under resource constraints, reinforcement learning (RL) agents need to be simple, efficient, and scalable, addressing (1) large state spaces and (2) the continuous accumulation of interaction data.

LEMMA, Reinforcement Learning (RL)

Optimistic Thompson Sampling for No-Regret Learning in Unknown Games

no code implementations 7 Feb 2024 Yingru Li, Liangqi Liu, Wenqiang Pu, Hao Liang, Zhi-Quan Luo

This work tackles the complexities of multi-player scenarios in \emph{unknown games}, where the primary challenge lies in navigating the uncertainty of the environment through bandit feedback alongside strategic decision-making.

Decision Making, Thompson Sampling

Why Transformers Need Adam: A Hessian Perspective

1 code implementation 26 Feb 2024 Yushun Zhang, Congliang Chen, Tian Ding, Ziniu Li, Ruoyu Sun, Zhi-Quan Luo

SGD performs worse than Adam by a significant margin on Transformers, but the reason remains unclear.

Radar Anti-jamming Strategy Learning via Domain-knowledge Enhanced Online Convex Optimization

no code implementations 26 Feb 2024 Liangqi Liu, Wenqiang Pu, Yingru Li, Bo Jiu, Zhi-Quan Luo

The dynamic competition between radar and jammer systems presents a significant challenge for modern Electronic Warfare (EW), as current active learning approaches still lack sample efficiency and fail to exploit the jammer's characteristics.

Active Learning

Prior-dependent analysis of posterior sampling reinforcement learning with function approximation

no code implementations 17 Mar 2024 Yingru Li, Zhi-Quan Luo

This work advances randomized exploration in reinforcement learning (RL) with function approximation modeled by linear mixture MDPs.

Reinforcement Learning (RL)
