Search Results for author: Zhaoran Wang

Found 134 papers, 13 papers with code

Breaking the Curse of Many Agents: Provable Mean Embedding $Q$-Iteration for Mean-Field Reinforcement Learning

no code implementations ICML 2020 Lingxiao Wang, Zhuoran Yang, Zhaoran Wang

We highlight that MF-FQI algorithm enjoys a "blessing of many agents" property in the sense that a larger number of observed agents improves the performance of MF-FQI algorithm.

Multi-agent Reinforcement Learning reinforcement-learning

Deep Reinforcement Learning with Smooth Policy

no code implementations ICML 2020 Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, Tuo Zhao

In contrast to policy parameterized by linear/reproducing kernel functions, where simple regularization techniques suffice to control smoothness, for neural network based reinforcement learning algorithms, there is no readily available solution to learn a smooth policy.

reinforcement-learning

Computational and Statistical Tradeoffs in Inferring Combinatorial Structures of Ising Model

no code implementations ICML 2020 Ying Jin, Zhaoran Wang, Junwei Lu

We study the computational and statistical tradeoffs in inferring combinatorial structures of high dimensional simple zero-field ferromagnetic Ising model.

Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning

no code implementations 5 May 2022 Boxiang Lyu, Zhaoran Wang, Mladen Kolar, Zhuoran Yang

In the setting where the function approximation is employed to handle large state spaces, with only mild assumptions on the expressiveness of the function class, we are able to design a dynamic mechanism using offline reinforcement learning algorithms.

Offline RL reinforcement-learning

Sample-Efficient Reinforcement Learning for POMDPs with Linear Function Approximations

no code implementations 20 Apr 2022 Qi Cai, Zhuoran Yang, Zhaoran Wang

Specifically, we focus on a class of undercomplete POMDPs with linear function approximations, which allows the state and observation spaces to be infinite.

reinforcement-learning

Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets

no code implementations 7 Mar 2022 Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang

We study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market.

reinforcement-learning

Learning Dynamic Mechanisms in Unknown Environments: A Reinforcement Learning Approach

no code implementations 25 Feb 2022 Boxiang Lyu, Qinglin Meng, Shuang Qiu, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan

Dynamic mechanism design studies how mechanism designers should allocate resources among agents in a time-varying environment.

reinforcement-learning

Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning

1 code implementation ICLR 2022 Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang

We show that such OOD sampling and pessimistic bootstrapping yields provable uncertainty quantifier in linear MDPs, thus providing the theoretical underpinning for PBRL.

Offline RL reinforcement-learning

Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning

no code implementations 22 Feb 2022 Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu

This paper proposes a novel model of sequential information design, namely the Markov persuasion processes (MPPs), where a sender, with informational advantage, seeks to persuade a stream of myopic receivers to take actions that maximize the sender's cumulative utilities in a finite horizon Markovian environment with varying prior and utility functions.

reinforcement-learning

Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets

no code implementations 15 Feb 2022 Han Zhong, Wei Xiong, Jiyuan Tan, LiWei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang

When the dataset does not have uniform coverage over all policy pairs, finding an approximate NE involves challenges in three aspects: (i) distributional shift between the behavior policy and the optimal policy, (ii) function approximation to handle large state space, and (iii) minimax optimization for equilibrium solving.

Joint Differentiable Optimization and Verification for Certified Reinforcement Learning

no code implementations 28 Jan 2022 YiXuan Wang, Chao Huang, Zhaoran Wang, Zhuoran Yang, Qi Zhu

In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties (e.g., safety, stability) under the learned controller.

Bilevel Optimization Model-based Reinforcement Learning +1

Exponential Family Model-Based Reinforcement Learning via Score Matching

no code implementations 28 Dec 2021 Gene Li, Junbo Li, Nathan Srebro, Zhaoran Wang, Zhuoran Yang

We propose an optimistic model-based algorithm, dubbed SMRL, for finite-horizon episodic reinforcement learning (RL) when the transition model is specified by exponential family distributions with $d$ parameters and the reward is bounded and known.

Density Estimation Model-based Reinforcement Learning +1

Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopic Followers?

no code implementations 27 Dec 2021 Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan

We develop sample-efficient reinforcement learning (RL) algorithms for solving for an SNE in both online and offline settings.

reinforcement-learning

Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic

no code implementations NeurIPS 2021 Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang

Specifically, we consider a version of AC where the actor and critic are represented by overparameterized two-layer neural networks and are updated with two-timescale learning rates.

Representation Learning

ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning

no code implementations 11 Dec 2021 Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan

In this paper, we present a scalable and elastic library ElegantRL-podracer for cloud-native deep reinforcement learning, which efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels.

reinforcement-learning

Offline Constrained Multi-Objective Reinforcement Learning via Pessimistic Dual Value Iteration

no code implementations NeurIPS 2021 Runzhe Wu, Yufeng Zhang, Zhuoran Yang, Zhaoran Wang

In constrained multi-objective RL, the goal is to learn a policy that achieves the best performance specified by a multi-objective preference function under a constraint.

reinforcement-learning

Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL

1 code implementation NeurIPS 2021 Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, Tuo Zhao

Theoretically, under a weak coverage assumption that the experience dataset contains enough information about the optimal policy, we prove that for an episodic mean-field MDP with a horizon $H$ and $N$ training trajectories, SAFARI attains a sub-optimality gap of $\mathcal{O}(H^2d_{\rm eff} /\sqrt{N})$, where $d_{\rm eff}$ is the effective dimension of the function class for parameterizing the value function, but is independent of the number of agents.

Multi-agent Reinforcement Learning

BooVI: Provably Efficient Bootstrapped Value Iteration

no code implementations NeurIPS 2021 Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang

Despite the tremendous success of reinforcement learning (RL) with function approximation, efficient exploration remains a significant challenge, both practically and theoretically.

Efficient Exploration reinforcement-learning

FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance

no code implementations 7 Nov 2021 Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo

Unfortunately, the steep learning curve and the difficulty in quick modeling and agile development are impeding finance researchers from using deep reinforcement learning in quantitative trading.

reinforcement-learning Stock Trend Prediction

Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning

no code implementations NeurIPS 2021 Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang

The exponential Bellman equation inspires us to develop a novel analysis of Bellman backup procedures in risk-sensitive RL algorithms, and further motivates the design of a novel exploration mechanism.

reinforcement-learning

SCORE: Spurious COrrelation REduction for Offline Reinforcement Learning

1 code implementation 24 Oct 2021 Zhihong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Zhaoran Wang, Jing Jiang

Offline reinforcement learning (RL) aims to learn the optimal policy from a pre-collected dataset without online interactions.

Offline RL reinforcement-learning

Dynamic Bottleneck for Robust Self-Supervised Exploration

1 code implementation NeurIPS 2021 Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang

Exploration methods based on pseudo-count of transitions or curiosity of dynamics have achieved promising results in solving reinforcement learning with sparse rewards.

reinforcement-learning

On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game

no code implementations 19 Oct 2021 Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang

Then, given any extrinsic reward, the agent computes the policy via a planning algorithm with offline data collected in the exploration phase.

Inducing Equilibria via Incentives: Simultaneous Design-and-Play Finds Global Optima

no code implementations 4 Oct 2021 Boyi Liu, Jiayang Li, Zhuoran Yang, Hoi-To Wai, Mingyi Hong, Yu Marco Nie, Zhaoran Wang

To regulate a social system comprised of self-interested agents, economic incentives (e.g., taxes, tolls, and subsidies) are often required to induce a desirable outcome.

A Principled Permutation Invariant Approach to Mean-Field Multi-Agent Reinforcement Learning

no code implementations 29 Sep 2021 Yan Li, Lingxiao Wang, Jiachen Yang, Ethan Wang, Zhaoran Wang, Tuo Zhao, Hongyuan Zha

To exploit the permutation invariance therein, we propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.

Multi-agent Reinforcement Learning reinforcement-learning

Can Reinforcement Learning Efficiently Find Stackelberg-Nash Equilibria in General-Sum Markov Games?

no code implementations 29 Sep 2021 Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael Jordan

To the best of our knowledge, we establish the first provably efficient RL algorithms for solving SNE in general-sum Markov games with leader-controlled state transitions.

reinforcement-learning

Provably Efficient Generative Adversarial Imitation Learning for Online and Offline Setting with Linear Function Approximation

no code implementations 19 Aug 2021 Zhihan Liu, Yufeng Zhang, Zuyue Fu, Zhuoran Yang, Zhaoran Wang

In generative adversarial imitation learning (GAIL), the agent aims to learn a policy from an expert demonstration so that its performance cannot be discriminated from the expert policy on a certain predefined reward set.

Imitation Learning

Online Bootstrap Inference For Policy Evaluation in Reinforcement Learning

no code implementations 8 Aug 2021 Pratik Ramprasad, Yuantong Li, Zhuoran Yang, Zhaoran Wang, Will Wei Sun, Guang Cheng

The recent emergence of reinforcement learning has created a demand for robust statistical inference methods for the parameter estimates computed using these algorithms.

online learning reinforcement-learning

Towards General Function Approximation in Zero-Sum Markov Games

no code implementations ICLR 2022 Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang

In the coordinated setting where both players are controlled by the agent, we propose a model-based algorithm and a model-free algorithm.

A Unified Off-Policy Evaluation Approach for General Value Function

no code implementations 6 Jul 2021 Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang

We further show that unlike GTD, the learned GVFs by GenTD are guaranteed to converge to the ground truth GVFs as long as the function approximation power is sufficiently large.

Anomaly Detection

Gap-Dependent Bounds for Two-Player Markov Games

no code implementations 1 Jul 2021 Zehao Dou, Zhuoran Yang, Zhaoran Wang, Simon S. Du

As one of the most popular methods in the field of reinforcement learning, Q-learning has received increasing attention.

Q-Learning reinforcement-learning

Randomized Exploration for Reinforcement Learning with General Value Function Approximation

no code implementations 15 Jun 2021 Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin F. Yang

We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle.

reinforcement-learning

Verification in the Loop: Correct-by-Construction Control Learning with Reach-avoid Guarantees

no code implementations 6 Jun 2021 YiXuan Wang, Chao Huang, Zhaoran Wang, Zhilu Wang, Qi Zhu

Specifically, we leverage the verification results (computed reachable set of the system state) to construct feedback metrics for control learning, which measure how likely the current design of control parameters can meet the required reach-avoid property for safety and goal-reaching.

Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach

no code implementations 18 May 2021 Yan Li, Lingxiao Wang, Jiachen Yang, Ethan Wang, Zhaoran Wang, Tuo Zhao, Hongyuan Zha

To exploit the permutation invariance therein, we propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.

Multi-agent Reinforcement Learning reinforcement-learning

Principled Exploration via Optimistic Bootstrapping and Backward Induction

1 code implementation 13 May 2021 Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang

In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I).

Efficient Exploration

Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality

no code implementations 23 Feb 2021 Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang

We also show that the overall convergence of DR-Off-PAC is doubly robust to the approximation errors that depend only on the expressive power of approximation functions.

Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning

no code implementations 19 Feb 2021 Luofeng Liao, Zuyue Fu, Zhuoran Yang, Yixin Wang, Mladen Kolar, Zhaoran Wang

Instrumental variables (IVs), in the context of RL, are the variables whose influence on the state variables is mediated entirely through the action.

Offline RL reinforcement-learning

A Primal-Dual Approach to Constrained Markov Decision Processes

no code implementations 26 Jan 2021 Yi Chen, Jing Dong, Zhaoran Wang

In many operations management problems, we need to make decisions sequentially to minimize the cost while satisfying certain constraints.

Optimization and Control

Offline Policy Optimization with Variance Regularization

no code implementations 1 Jan 2021 Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Zhaoran Wang, Animesh Garg, Lihong Li, Doina Precup

Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications.

Continuous Control Offline RL +1

Optimistic Policy Optimization with General Function Approximations

no code implementations 1 Jan 2021 Qi Cai, Zhuoran Yang, Csaba Szepesvari, Zhaoran Wang

Although policy optimization with neural networks has a track record of achieving state-of-the-art results in reinforcement learning on various domains, the theoretical understanding of the computational and sample efficiency of policy optimization remains restricted to linear function approximations with finite-dimensional feature representations, which hinders the design of principled, effective, and efficient algorithms.

reinforcement-learning

Policy Optimization in Zero-Sum Markov Games: Fictitious Self-Play Provably Attains Nash Equilibria

no code implementations 1 Jan 2021 Boyi Liu, Zhuoran Yang, Zhaoran Wang

Specifically, in each iteration, each player infers the policy of the opponent implicitly via policy evaluation and improves its current policy by taking the smoothed best-response via a proximal policy optimization (PPO) step.

Optimistic Exploration with Backward Bootstrapped Bonus for Deep Reinforcement Learning

no code implementations 1 Jan 2021 Chenjia Bai, Lingxiao Wang, Peng Liu, Zhaoran Wang, Jianye Hao, Yingnan Zhao

However, such an approach is challenging in developing practical exploration algorithms for Deep Reinforcement Learning (DRL).

Atari Games Efficient Exploration +2

Provably Training Neural Network Classifiers under Fairness Constraints

no code implementations 30 Dec 2020 You-Lin Chen, Zhaoran Wang, Mladen Kolar

Training a classifier under fairness constraints has received increasing attention in the machine learning community for moral, legal, and business reasons.

Fairness online learning

Is Pessimism Provably Efficient for Offline RL?

no code implementations 30 Dec 2020 Ying Jin, Zhuoran Yang, Zhaoran Wang

We study offline reinforcement learning (RL), which aims to learn an optimal policy based on a dataset collected a priori.

Offline RL

Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds Globally Optimal Policy

no code implementations 28 Dec 2020 Han Zhong, Ethan X. Fang, Zhuoran Yang, Zhaoran Wang

In particular, we focus on a variance-constrained policy optimization problem where the goal is to find a policy that maximizes the expected value of the long-run average reward, subject to a constraint that the long-run variance of the average reward is upper bounded by a threshold.

reinforcement-learning

Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization

no code implementations 21 Dec 2020 Zhuoran Yang, Yufeng Zhang, Yongxin Chen, Zhaoran Wang

Specifically, we prove that moving along the geodesic in the direction of functional gradient with respect to the second-order Wasserstein distance is equivalent to applying a pushforward mapping to a probability distribution, which can be approximated accurately by pushing a set of particles.

Variational Inference

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

no code implementations NeurIPS 2020 Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang

Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.

Q-Learning reinforcement-learning

Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations

no code implementations NeurIPS 2020 Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael Jordan

Reinforcement learning (RL) algorithms combined with modern function approximators such as kernel functions and deep neural networks have achieved significant empirical successes in large-scale application problems with a massive number of states.

reinforcement-learning

Provably Efficient Neural GTD for Off-Policy Learning

no code implementations NeurIPS 2020 Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong

This paper studies a gradient temporal difference (GTD) algorithm using neural network (NN) function approximators to minimize the mean squared Bellman error (MSBE).

Provably Efficient Neural Estimation of Structural Equation Models: An Adversarial Approach

no code implementations NeurIPS 2020 Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Mladen Kolar, Zhaoran Wang

We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation.

online learning

On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces

no code implementations 9 Nov 2020 Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan

The classical theory of reinforcement learning (RL) has focused on tabular and linear representations of value functions.

reinforcement-learning

End-to-End Learning and Intervention in Games

no code implementations NeurIPS 2020 Jiayang Li, Jing Yu, Yu Marco Nie, Zhaoran Wang

In this paper, we provide a unified framework for learning and intervention in games.

Variational Dynamic for Self-Supervised Exploration in Deep Reinforcement Learning

no code implementations 17 Oct 2020 Chenjia Bai, Peng Liu, Kaiyu Liu, Lingxiao Wang, Yingnan Zhao, Lei Han, Zhaoran Wang

Efficient exploration remains a challenging problem in reinforcement learning, especially for tasks where extrinsic rewards from environments are sparse or even totally disregarded.

Efficient Exploration reinforcement-learning +1

Provable Fictitious Play for General Mean-Field Games

no code implementations 8 Oct 2020 Qiaomin Xie, Zhuoran Yang, Zhaoran Wang, Andreea Minca

We propose a reinforcement learning algorithm for stationary mean-field games, where the goal is to learn a pair of mean-field state and stationary policy that constitutes the Nash equilibrium.

reinforcement-learning

Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning

no code implementations 23 Aug 2020 Shuang Qiu, Zhuoran Yang, Xiaohan Wei, Jieping Ye, Zhaoran Wang

Existing approaches for this problem are based on two-timescale or double-loop stochastic gradient algorithms, which may also require sampling large-batch data.

Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time

no code implementations 16 Aug 2020 Weichen Wang, Jiequn Han, Zhuoran Yang, Zhaoran Wang

Reinforcement learning is a powerful tool to learn the optimal policy of possibly multiple agents by interacting with the environment.

Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy

no code implementations ICLR 2021 Zuyue Fu, Zhuoran Yang, Zhaoran Wang

To the best of our knowledge, we establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time.

A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic

no code implementations 10 Jul 2020 Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang

Bilevel optimization is a class of problems which exhibit a two-level structure, and its goal is to minimize an outer objective function with variables which are constrained to be the optimal solution to an (inner) optimization problem.

Bilevel Optimization Hyperparameter Optimization

Accelerating Nonconvex Learning via Replica Exchange Langevin Diffusion

no code implementations ICLR 2019 Yi Chen, Jinglin Chen, Jing Dong, Jian Peng, Zhaoran Wang

To attain the advantages of both regimes, we propose to use replica exchange, which swaps between two Langevin diffusions with different temperatures.

Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach

no code implementations 2 Jul 2020 Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Zhaoran Wang, Mladen Kolar

We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation.

online learning

Dynamic Regret of Policy Optimization in Non-stationary Environments

no code implementations NeurIPS 2020 Yingjie Fei, Zhuoran Yang, Zhaoran Wang, Qiaomin Xie

We consider reinforcement learning (RL) in episodic MDPs with adversarial full-information reward feedback and unknown fixed transition kernels.

On the Global Optimality of Model-Agnostic Meta-Learning

no code implementations ICML 2020 Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang

Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel optimization problem, where the inner level solves each subtask based on a shared prior, while the outer level searches for the optimal shared prior by optimizing its aggregated performance over all the subtasks.

Bilevel Optimization Meta-Learning

Provably Efficient Causal Reinforcement Learning with Confounded Observational Data

no code implementations NeurIPS 2021 Lingxiao Wang, Zhuoran Yang, Zhaoran Wang

Empowered by expressive function approximators such as neural networks, deep reinforcement learning (DRL) achieves tremendous empirical successes.

Autonomous Driving reinforcement-learning

Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret

no code implementations NeurIPS 2020 Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie

We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility.

Q-Learning reinforcement-learning

Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration for Mean-Field Reinforcement Learning

no code implementations 21 Jun 2020 Lingxiao Wang, Zhuoran Yang, Zhaoran Wang

We highlight that MF-FQI algorithm enjoys a "blessing of many agents" property in the sense that a larger number of observed agents improves the performance of MF-FQI algorithm.

Multi-agent Reinforcement Learning reinforcement-learning

Neural Certificates for Safe Control Policies

no code implementations 15 Jun 2020 Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou

This paper develops an approach to learn a policy of a dynamical system that is guaranteed to be both provably safe and goal-reaching.

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

no code implementations 8 Jun 2020 Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang

We aim to answer the following questions: When the function approximator is a neural network, how does the associated feature representation evolve?

Q-Learning

Deep Reinforcement Learning with Robust and Smooth Policy

no code implementations 21 Mar 2020 Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, Tuo Zhao

Deep reinforcement learning (RL) has achieved great empirical successes in various domains.

reinforcement-learning

Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate

no code implementations 8 Mar 2020 Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang

Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks.

Imitation Learning reinforcement-learning

Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees

no code implementations ICML 2020 Sen Na, Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar

We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.

Graph Representation Learning

Upper Confidence Primal-Dual Reinforcement Learning for CMDP with Adversarial Loss

no code implementations NeurIPS 2020 Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jieping Ye, Zhaoran Wang

In particular, we prove that the proposed algorithm achieves $\widetilde{\mathcal{O}}(L|\mathcal{S}|\sqrt{|\mathcal{A}|T})$ upper bounds of both the regret and the constraint violation, where $L$ is the length of each episode.

online learning reinforcement-learning

Provably Efficient Safe Exploration via Primal-Dual Policy Optimization

no code implementations 1 Mar 2020 Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović

To this end, we present an Optimistic Primal-Dual Proximal Policy Optimization (OPDOP) algorithm where the value function is estimated by combining the least-squares policy evaluation and an additional bonus term for safe exploration.

Safe Exploration Safe Reinforcement Learning

Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium

no code implementations 17 Feb 2020 Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang

In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.

Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework

1 code implementation NeurIPS 2020 Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou

This paper develops a Pontryagin Differentiable Programming (PDP) methodology, which establishes a unified framework to solve a broad class of learning and control tasks.

Provably Efficient Exploration in Policy Optimization

no code implementations ICML 2020 Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang

While policy-based reinforcement learning (RL) achieves tremendous successes in practice, it is significantly less understood in theory, especially compared with value-based RL.

Efficient Exploration reinforcement-learning

Variance Reduced Policy Evaluation with Smooth Function Approximation

no code implementations NeurIPS 2019 Hoi-To Wai, Mingyi Hong, Zhuoran Yang, Zhaoran Wang, Kexin Tang

Policy evaluation with smooth and nonlinear function approximation has shown great potential for reinforcement learning.

reinforcement-learning

Neural Temporal-Difference Learning Converges to Global Optima

no code implementations NeurIPS 2019 Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang

Temporal-difference learning (TD), coupled with neural networks, is among the most fundamental building blocks of deep reinforcement learning.

Q-Learning reinforcement-learning

Neural Trust Region/Proximal Policy Optimization Attains Globally Optimal Policy

no code implementations NeurIPS 2019 Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang

Proximal policy optimization and trust region policy optimization (PPO and TRPO) with actor and critic parametrized by neural networks achieve significant empirical success in deep reinforcement learning.

reinforcement-learning

Statistical-Computational Tradeoff in Single Index Models

no code implementations NeurIPS 2019 Lingxiao Wang, Zhuoran Yang, Zhaoran Wang

Using the statistical query model to characterize the computational cost of an algorithm, we show that when $\mathrm{Cov}(Y, X^\top\beta^*)=0$ and $\mathrm{Cov}(Y,(X^\top\beta^*)^2)>0$, no computationally tractable algorithms can achieve the information-theoretic limit of the minimax risk.

Convergent Policy Optimization for Safe Reinforcement Learning

1 code implementation NeurIPS 2019 Ming Yu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang

We study the safe reinforcement learning problem with nonlinear function approximation, where policy optimization is formulated as a constrained optimization problem with both the objective and the constraint being nonconvex functions.

Multi-agent Reinforcement Learning reinforcement-learning +1

Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games

no code implementations ICLR 2020 Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang

We study discrete-time mean-field Markov games with infinite numbers of agents where each agent aims to minimize its ergodic cost.

Sample Elicitation

1 code implementation 8 Oct 2019 Jiaheng Wei, Zuyue Fu, Yang Liu, Xingyu Li, Zhuoran Yang, Zhaoran Wang

We also show a connection between this sample elicitation problem and $f$-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.

Credible Sample Elicitation by Deep Learning, for Deep Learning

no code implementations25 Sep 2019 Yang Liu, Zuyue Fu, Zhuoran Yang, Zhaoran Wang

While classical elicitation results apply to eliciting a complex and generative (and continuous) distribution $p(x)$ for this image data, we are interested in eliciting samples $x_i \sim p(x)$ from agents.

Neural Policy Gradient Methods: Global Optimality and Rates of Convergence

no code implementations ICLR 2020 Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang

In detail, we prove that neural natural policy gradient converges to a globally optimal policy at a sublinear rate.

Policy Gradient Methods
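
The vanilla policy-gradient step underlying these results can be sketched in the simplest possible setting; the two-armed bandit, the softmax (rather than neural) policy, and the step size below are assumptions made purely for illustration.

```python
import numpy as np

# REINFORCE with a softmax policy on a two-armed bandit (arm 1 has the
# higher mean reward). For softmax, the score function is
# grad_theta log pi(a) = e_a - pi.
rng = np.random.default_rng(0)
means = np.array([0.0, 1.0])
theta = np.zeros(2)
alpha = 0.1

for t in range(2000):
    p = np.exp(theta - theta.max())
    p /= p.sum()
    a = rng.choice(2, p=p)
    r = means[a] + 0.1 * rng.standard_normal()
    grad_log_pi = -p
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi   # stochastic policy-gradient step

p = np.exp(theta - theta.max())
p /= p.sum()
print(p[1])   # probability assigned to the better arm
```

After training, the policy concentrates on the higher-reward arm, the bandit analogue of the global-optimality guarantee discussed above.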

Fast Multi-Agent Temporal-Difference Learning via Homotopy Stochastic Primal-Dual Optimization

no code implementations7 Aug 2019 Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović

We study the policy evaluation problem in multi-agent reinforcement learning where a group of agents, with jointly observed states and private local actions and rewards, collaborate to learn the value function of a given policy via local computation and communication over a connected undirected network.

Multi-agent Reinforcement Learning Stochastic Optimization
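
A toy version of this cooperative policy-evaluation setting: each agent sees only a private local reward whose network-wide average defines the value function being learned. The two-agent complete network and the consensus-averaging TD scheme below are illustrative simplifications, not the paper's homotopy primal-dual method.

```python
import numpy as np

# Two agents evaluate the same policy on a deterministic 3-state cycle.
# Each runs a local TD(0) update with its private reward, then the agents
# average their weight vectors (consensus over a complete network), which
# makes the joint iterate track centralized TD on the average reward.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])              # cyclic transition matrix
r_local = np.array([[1.0, 0.0, 0.0],      # agent 0's private rewards
                    [0.0, 2.0, 0.0]])     # agent 1's private rewards
gamma, alpha = 0.9, 0.05

# Exact value function for the averaged reward, for comparison.
v_star = np.linalg.solve(np.eye(3) - gamma * P, r_local.mean(axis=0))

rng = np.random.default_rng(1)
w = np.zeros((2, 3))                      # one tabular value vector per agent
s = 0
for t in range(20000):
    s_next = rng.choice(3, p=P[s])
    for i in range(2):                    # local TD step with private reward
        delta = r_local[i, s] + gamma * w[i, s_next] - w[i, s]
        w[i, s] += alpha * delta
    w[:] = w.mean(axis=0)                 # consensus averaging
    s = s_next

print(np.max(np.abs(w[0] - v_star)))
```

Because the averaging step makes the agents' iterates identical, the scheme reduces to centralized TD on the averaged reward, which is exactly the value function the agents jointly want.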

More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning

no code implementations NeurIPS 2016 Xinyang Yi, Zhaoran Wang, Zhuoran Yang, Constantine Caramanis, Han Liu

We consider the weakly supervised binary classification problem where the labels are randomly flipped with probability $1-\alpha$.

Provably Efficient Reinforcement Learning with Linear Function Approximation

1 code implementation 11 Jul 2019 Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan

Modern Reinforcement Learning (RL) is commonly applied to practical problems with an enormous number of states, where function approximation must be deployed to approximate either the value function or the policy.

reinforcement-learning

A Communication-Efficient Multi-Agent Actor-Critic Algorithm for Distributed Reinforcement Learning

no code implementations6 Jul 2019 Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Başar, Romeil Sandhu, Ji Liu

This paper considers a distributed reinforcement learning problem in which a network of multiple agents aims to cooperatively maximize the globally averaged return through communication with only local neighbors.

reinforcement-learning

Neural Proximal/Trust Region Policy Optimization Attains Globally Optimal Policy

no code implementations25 Jun 2019 Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang

Proximal policy optimization and trust region policy optimization (PPO and TRPO) with actor and critic parametrized by neural networks achieve significant empirical success in deep reinforcement learning.

reinforcement-learning

Neural Temporal-Difference and Q-Learning Provably Converge to Global Optima

1 code implementation NeurIPS 2019 Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang

Temporal-difference learning (TD), coupled with neural networks, is among the most fundamental building blocks of deep reinforcement learning.

Q-Learning reinforcement-learning

On Tighter Generalization Bounds for Deep Neural Networks: CNNs, ResNets, and Beyond

no code implementations ICLR 2019 Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao

We propose a generalization error bound for a general family of deep neural networks based on the depth and width of the networks, as well as the spectral norm of weight matrices.

Generalization Bounds

A Multi-Agent Off-Policy Actor-Critic Algorithm for Distributed Reinforcement Learning

1 code implementation 15 Mar 2019 Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Zhaoran Wang, Tamer Başar, Ji Liu

This paper extends off-policy reinforcement learning to the multi-agent case in which a set of networked agents communicating with their neighbors according to a time-varying graph collaboratively evaluates and improves a target policy while following a distinct behavior policy.

reinforcement-learning

On the Global Convergence of Imitation Learning: A Case for Linear Quadratic Regulator

no code implementations11 Jan 2019 Qi Cai, Mingyi Hong, Yongxin Chen, Zhaoran Wang

We study the global convergence of generative adversarial imitation learning for linear quadratic regulators, which is posed as minimax optimization.

Imitation Learning reinforcement-learning

A Theoretical Analysis of Deep Q-Learning

no code implementations1 Jan 2019 Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang

Despite the great empirical success of deep reinforcement learning, its theoretical foundation is less well understood.

Q-Learning

Provable Gaussian Embedding with One Observation

no code implementations NeurIPS 2018 Ming Yu, Zhuoran Yang, Tuo Zhao, Mladen Kolar, Zhaoran Wang

In this paper, we study the Gaussian embedding model and develop the first theoretical results for exponential family embedding models.

High-dimensional Varying Index Coefficient Models via Stein's Identity

1 code implementation 16 Oct 2018 Sen Na, Zhuoran Yang, Zhaoran Wang, Mladen Kolar

We study the parameter estimation problem for a varying index coefficient model in high dimensions.

A convex formulation for high-dimensional sparse sliced inverse regression

no code implementations17 Sep 2018 Kean Ming Tan, Zhaoran Wang, Tong Zhang, Han Liu, R. Dennis Cook

Sliced inverse regression is a popular tool for sufficient dimension reduction, which replaces covariates with a minimal set of their linear combinations without loss of information on the conditional distribution of the response given the covariates.

Dimensionality Reduction Variable Selection

Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes

no code implementations NeurIPS 2016 Chris Junchi Li, Zhaoran Wang, Han Liu

Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory.

Tensor Decomposition

Curse of Heterogeneity: Computational Barriers in Sparse Mixture Models and Phase Retrieval

no code implementations21 Aug 2018 Jianqing Fan, Han Liu, Zhaoran Wang, Zhuoran Yang

We study the fundamental tradeoffs between statistical accuracy and computational tractability in the analysis of high dimensional heterogeneous data.

The Edge Density Barrier: Computational-Statistical Tradeoffs in Combinatorial Inference

no code implementations ICML 2018 Hao Lu, Yuan Cao, Zhuoran Yang, Junwei Lu, Han Liu, Zhaoran Wang

We study the hypothesis testing problem of inferring the existence of combinatorial structures in undirected graphical models.

Two-sample testing

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond

no code implementations13 Jun 2018 Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, Tuo Zhao

We establish a margin based data dependent generalization error bound for a general family of deep neural networks in terms of the depth and width, as well as the Jacobian of the networks.

Generalization Bounds

Multi-Agent Reinforcement Learning via Double Averaging Primal-Dual Optimization

no code implementations NeurIPS 2018 Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong

Despite the success of single-agent reinforcement learning, multi-agent reinforcement learning (MARL) remains challenging due to complex interactions between agents.

Multi-agent Reinforcement Learning reinforcement-learning

Detecting Nonlinear Causality in Multivariate Time Series with Sparse Additive Models

no code implementations11 Mar 2018 Yingxiang Yang, Adams Wei Yu, Zhaoran Wang, Tuo Zhao

We propose a nonparametric method for detecting nonlinear causal relationship within a set of multidimensional discrete time series, by using sparse additive models (SpAMs).

Additive models Model Selection +1

Misspecified Nonconvex Statistical Optimization for Phase Retrieval

no code implementations18 Dec 2017 Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov

Existing nonconvex statistical optimization theory and methods crucially rely on the correct specification of the underlying "true" statistical models.

Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization

no code implementations29 Dec 2016 Xingguo Li, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, Zhaoran Wang, Tuo Zhao

We propose a general theory for studying the landscape of nonconvex optimization with underlying symmetric structures for a class of machine learning problems (e.g., low-rank matrix factorization, phase retrieval, and deep linear neural networks).

Agnostic Estimation for Misspecified Phase Retrieval Models

no code implementations NeurIPS 2016 Matey Neykov, Zhaoran Wang, Han Liu

The goal of noisy high-dimensional phase retrieval is to estimate an $s$-sparse parameter $\boldsymbol{\beta}^*\in \mathbb{R}^d$ from $n$ realizations of the model $Y = (\boldsymbol{X}^{\top} \boldsymbol{\beta}^*)^2 + \varepsilon$.
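
For intuition, under a Gaussian design the model $Y = (\boldsymbol{X}^{\top}\boldsymbol{\beta}^*)^2 + \varepsilon$ admits a simple spectral estimate of the direction of $\boldsymbol{\beta}^*$; the dense (non-sparse) parameter and the plain spectral method below are simplifying assumptions, whereas the paper's agnostic estimator also handles sparsity and misspecification.

```python
import numpy as np

# Spectral direction estimate for y = (x^T beta)^2 + eps with Gaussian x:
# E[y x x^T] = 2 beta beta^T + ||beta||^2 I, so the leading eigenvector of
# the empirical moment matrix aligns with beta (up to sign).
rng = np.random.default_rng(0)
n, d = 2000, 10
beta = np.zeros(d)
beta[:3] = 1.0 / np.sqrt(3)              # unit-norm ground truth
X = rng.standard_normal((n, d))
y = (X @ beta) ** 2 + 0.1 * rng.standard_normal(n)

M = (X * y[:, None]).T @ X / n           # (1/n) sum_i y_i x_i x_i^T
_, vecs = np.linalg.eigh(M)
v = vecs[:, -1]                          # leading eigenvector

print(abs(v @ beta))                     # alignment with beta, up to sign
```

Note the sign of $\boldsymbol{\beta}^*$ is unidentifiable from $(\boldsymbol{X}^{\top}\boldsymbol{\beta}^*)^2$, so only the direction up to sign can be recovered.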

Blind Attacks on Machine Learners

no code implementations NeurIPS 2016 Alex Beatson, Zhaoran Wang, Han Liu

We study the potential of a “blind attacker” to provably limit a learner’s performance by data injection attack without observing the learner’s training set or any parameter of the distribution from which it is drawn.

Tensor Graphical Model: Non-convex Optimization and Statistical Inference

no code implementations15 Sep 2016 Xiang Lyu, Will Wei Sun, Zhaoran Wang, Han Liu, Jian Yang, Guang Cheng

We consider the estimation and inference of graphical models that characterize the dependency structure of high-dimensional tensor-valued data.

NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization

no code implementations NeurIPS 2016 Davood Hajinezhad, Mingyi Hong, Tuo Zhao, Zhaoran Wang

We study a stochastic and distributed algorithm for nonconvex problems whose objective consists of a sum of $N$ nonconvex $L_i/N$-smooth functions, plus a nonsmooth regularizer.

Stochastic Optimization

Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow

no code implementations29 Apr 2016 Kean Ming Tan, Zhaoran Wang, Han Liu, Tong Zhang

Sparse generalized eigenvalue problem (GEP) plays a pivotal role in a large family of high-dimensional statistical models, including sparse Fisher's discriminant analysis, canonical correlation analysis, and sufficient dimension reduction.

Dimensionality Reduction

Sharp Computational-Statistical Phase Transitions via Oracle Computational Model

no code implementations30 Dec 2015 Zhaoran Wang, Quanquan Gu, Han Liu

Based upon an oracle model of computation, which captures the interactions between algorithms and data, we establish a general lower bound that explicitly connects the minimum testing risk under computational budget constraints with the intrinsic probabilistic and combinatorial structures of statistical problems.

Two-sample testing

Non-convex Statistical Optimization for Sparse Tensor Graphical Model

no code implementations NeurIPS 2015 Wei Sun, Zhaoran Wang, Han Liu, Guang Cheng

We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data.

High Dimensional EM Algorithm: Statistical Optimization and Asymptotic Normality

no code implementations NeurIPS 2015 Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu

We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models.
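
A minimal sketch of a regularized EM iteration for one canonical latent-variable model, a symmetric two-component spherical Gaussian mixture; the hard-truncation step standing in for a regularized M-step, the warm start near the truth (the theory requires suitable initialization), and all constants are assumptions for illustration.

```python
import numpy as np

# EM for the mixture 0.5*N(beta, I) + 0.5*N(-beta, I) with sparse beta.
# E-step: P(z=+1 | x) = sigmoid(2 x^T beta).  M-step: weighted average,
# followed by hard truncation to the s largest coordinates (sparsity).
rng = np.random.default_rng(0)
d, n, s = 20, 2000, 3
beta_true = np.zeros(d)
beta_true[:s] = 2.0
z = rng.choice([-1, 1], size=n)
X = z[:, None] * beta_true + rng.standard_normal((n, d))

beta = beta_true + 0.5 * rng.standard_normal(d)   # warm start (assumed)
for _ in range(50):
    post = 1.0 / (1.0 + np.exp(-2.0 * X @ beta))            # E-step
    beta = ((2.0 * post - 1.0)[:, None] * X).mean(axis=0)   # M-step
    beta[np.argsort(np.abs(beta))[:-s]] = 0.0               # truncation

print(np.linalg.norm(beta - beta_true))
```

The truncation keeps the iterates $s$-sparse, which is what lets the high-dimensional analysis trade the ambient dimension $d$ for the sparsity level $s$ in the error rate.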

Sparse Nonlinear Regression: Parameter Estimation and Asymptotic Inference

no code implementations14 Nov 2015 Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C. Eldar, Tong Zhang

To recover $\beta^*$, we propose an $\ell_1$-regularized least-squares estimator.
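
The $\ell_1$-regularized least-squares estimator can be computed by proximal gradient descent (ISTA); the Gaussian design, noise level, and regularization weight below are illustrative assumptions rather than the paper's nonlinear-regression setting.

```python
import numpy as np

# ISTA for min_beta (1/2n) * ||y - X beta||^2 + lam * ||beta||_1.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, d, s = 200, 50, 5
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:s] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.05
L = np.linalg.eigvalsh(X.T @ X / n).max()   # Lipschitz constant of the gradient
beta = np.zeros(d)
for _ in range(1000):
    grad = X.T @ (X @ beta - y) / n
    beta = soft_threshold(beta - grad / L, lam / L)   # proximal step

print(np.linalg.norm(beta - beta_true))
```

With the regularization weight above the noise level, the estimator recovers the true support and incurs only a small shrinkage bias on the nonzero coordinates.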

Optimal linear estimation under unknown nonlinear transform

no code implementations NeurIPS 2015 Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu

This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing.

Statistical Limits of Convex Relaxations

no code implementations4 Mar 2015 Zhaoran Wang, Quanquan Gu, Han Liu

Many high dimensional sparse learning problems are formulated as nonconvex optimization.

Sparse Learning Stochastic Block Model

High Dimensional Expectation-Maximization Algorithm: Statistical Optimization and Asymptotic Normality

no code implementations30 Dec 2014 Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu

We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models.

Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time

no code implementations NeurIPS 2014 Zhaoran Wang, Huanran Lu, Han Liu

In this paper, we propose a two-stage sparse PCA procedure that attains the optimal principal subspace estimator in polynomial time.

Sparse PCA with Oracle Property

no code implementations NeurIPS 2014 Quanquan Gu, Zhaoran Wang, Han Liu

In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-$k$, and attains a $\sqrt{s/n}$ statistical rate of convergence with $s$ being the subspace sparsity level and $n$ the sample size.

Nonconvex Statistical Optimization: Minimax-Optimal Sparse PCA in Polynomial Time

no code implementations22 Aug 2014 Zhaoran Wang, Huanran Lu, Han Liu

To optimally estimate sparse principal subspaces, we propose a two-stage computational framework named "tighten after relax": within the "relax" stage, we approximately solve a convex relaxation of sparse PCA with early stopping to obtain a desired initial estimator; for the "tighten" stage, we propose a novel algorithm called sparse orthogonal iteration pursuit (SOAP), which iteratively refines the initial estimator by directly solving the underlying nonconvex problem.
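
The "tighten" stage can be illustrated for a single component by truncated power iteration on a spiked covariance; the one-dimensional subspace, the diagonal-thresholding warm start standing in for the convex-relaxation stage, and all problem sizes below are assumptions for illustration.

```python
import numpy as np

# Truncated power iteration for the leading s-sparse eigenvector of a
# sample covariance drawn from a rank-one spiked model.
rng = np.random.default_rng(0)
d, n, s = 100, 500, 5
v_true = np.zeros(d)
v_true[:s] = 1.0 / np.sqrt(s)
Sigma = np.eye(d) + 4.0 * np.outer(v_true, v_true)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
S = X.T @ X / n                         # sample covariance

idx0 = np.argsort(np.diag(S))[-s:]      # warm start: top-s diagonal entries
v = np.zeros(d)
v[idx0] = 1.0 / np.sqrt(s)

for _ in range(50):
    u = S @ v                           # power step
    keep = np.argsort(np.abs(u))[-s:]   # hard-truncate to s coordinates
    u_trunc = np.zeros(d)
    u_trunc[keep] = u[keep]
    v = u_trunc / np.linalg.norm(u_trunc)

print(abs(v @ v_true))                  # alignment with the true direction
```

The hard truncation after each power step is what keeps the iterate in the sparse cone, so the statistical error depends on the sparsity level rather than the ambient dimension.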

Sparse Principal Component Analysis for High Dimensional Vector Autoregressive Models

no code implementations30 Jun 2013 Zhaoran Wang, Fang Han, Han Liu

We study sparse principal component analysis for high dimensional vector autoregressive time series under a doubly asymptotic framework, which allows the dimension $d$ to scale with the series length $T$.

Time Series

Optimal computational and statistical rates of convergence for sparse nonconvex learning problems

no code implementations20 Jun 2013 Zhaoran Wang, Han Liu, Tong Zhang

In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator.
