Search Results for author: Shaofeng Zou

Found 31 papers, 3 papers with code

Quickest Change Detection in Autoregressive Models

no code implementations • 13 Oct 2023 • Zhongchang Sun, Shaofeng Zou

The data-driven setting where the disturbance signal parameters are unknown is further investigated, and an online and computationally efficient gradient ascent CuSum algorithm is designed.

Change Detection
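For background on the entry above: the classical CuSum statistic accumulates log-likelihood ratios clipped at zero and raises an alarm at a threshold crossing. The sketch below is that generic recursion, not the paper's data-driven gradient-ascent variant, with illustrative Gaussian densities assumed:

```python
import numpy as np

def cusum_change_detector(samples, loglik_pre, loglik_post, threshold):
    """Classic CuSum: accumulate log-likelihood ratios, clipped at zero.

    Returns the first index at which the statistic crosses `threshold`,
    or None if no alarm is raised.
    """
    w = 0.0
    for t, x in enumerate(samples):
        w = max(0.0, w + loglik_post(x) - loglik_pre(x))
        if w >= threshold:
            return t
    return None

# Toy example: Gaussian mean shift from 0 to 1 at time 100.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
ll = lambda mu: (lambda x: -0.5 * (x - mu) ** 2)  # log-density up to a constant
alarm = cusum_change_detector(data, ll(0.0), ll(1.0), threshold=10.0)
```

With a threshold of 10 and a post-change drift of 0.5 per sample, the alarm typically fires a few dozen samples after the true change point.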

Robust Multi-Agent Reinforcement Learning with State Uncertainty

1 code implementation • 30 Jul 2023 • Sihong He, Songyang Han, Sanbao Su, Shuo Han, Shaofeng Zou, Fei Miao

Then we propose a robust multi-agent Q-learning (RMAQ) algorithm to find such an equilibrium, with convergence guarantees.

Multi-agent Reinforcement Learning • Q-Learning • +1
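For intuition about robust Q-learning against model uncertainty, the snippet below sketches a single-agent, model-based robust Q-iteration under a delta-contamination uncertainty set; it is not the paper's multi-agent RMAQ algorithm, and the MDP numbers are made up for illustration:

```python
import numpy as np

def robust_q_iteration(P, r, gamma=0.9, delta=0.2, iters=500):
    """Tabular Q-iteration under a delta-contamination uncertainty set.

    The robust target mixes the nominal next-state value with the
    globally worst state value, weighted by delta.
    """
    S, A = r.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        v = Q.max(axis=1)                  # state values under the greedy policy
        worst = v.min()                    # adversary shifts mass to the worst state
        Q = r + gamma * ((1 - delta) * P @ v + delta * worst)
    return Q

# Tiny 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])  # P[s, a, s']
r = np.array([[1.0, 0.0], [0.0, 2.0]])
Q_robust = robust_q_iteration(P, r, delta=0.2)
Q_nominal = robust_q_iteration(P, r, delta=0.0)
```

Since the adversarial term can only lower the continuation value, the robust Q-values are bounded above by the nominal ones, element-wise.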

Achieving Minimax Optimal Sample Complexity of Offline Reinforcement Learning: A DRO-Based Approach

no code implementations • 22 May 2023 • Yue Wang, JinJun Xiong, Shaofeng Zou

We show that an improved sample complexity of $\mathcal{O}(SC^{\pi^*}\epsilon^{-2}(1-\gamma)^{-3})$ can be obtained, which matches with the minimax lower bound for offline reinforcement learning, and thus is minimax optimal.


Model-Free Robust Average-Reward Reinforcement Learning

no code implementations • 17 May 2023 • Yue Wang, Alvaro Velasquez, George Atia, Ashley Prater-Bennette, Shaofeng Zou

Robust Markov decision processes (MDPs) address the challenge of model uncertainty by optimizing the worst-case performance over an uncertainty set of MDPs.

Q-Learning • reinforcement-learning

Robust Average-Reward Markov Decision Processes

no code implementations • 2 Jan 2023 • Yue Wang, Alvaro Velasquez, George Atia, Ashley Prater-Bennette, Shaofeng Zou

We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
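As background for the robust relative value iteration mentioned above, here is a sketch of standard (non-robust) relative value iteration for average-reward MDPs; the toy two-state MDP is invented for illustration and has optimal average reward 1:

```python
import numpy as np

def relative_value_iteration(P, r, ref=0, iters=2000, tol=1e-10):
    """Relative value iteration for an average-reward MDP (non-robust sketch).

    Subtracting the value at a reference state keeps the iterates bounded;
    the subtracted quantity converges to the optimal average reward.
    """
    h = np.zeros(r.shape[0])
    g = 0.0
    for _ in range(iters):
        Th = (r + P @ h).max(axis=1)   # one application of the Bellman operator
        g = Th[ref]                    # current estimate of the average reward
        h_new = Th - g                 # re-center at the reference state
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return g, h

# Toy MDP: from state 0 you can stay (reward 0) or move to state 1 (reward 0);
# in state 1 you can stay (reward 1) or return to state 0 (reward 0.5).
P = np.zeros((2, 2, 2))                # P[s, a, s']
P[0, 0, 0] = P[0, 1, 1] = P[1, 0, 1] = P[1, 1, 0] = 1.0
r = np.array([[0.0, 0.0], [1.0, 0.5]])
g, h = relative_value_iteration(P, r)
```

The optimal policy moves to state 1 and stays, so the gain `g` converges to 1.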

A Robust and Constrained Multi-Agent Reinforcement Learning Electric Vehicle Rebalancing Method in AMoD Systems

no code implementations • 17 Sep 2022 • Sihong He, Yue Wang, Shuo Han, Shaofeng Zou, Fei Miao

In this work, we design a robust and constrained multi-agent reinforcement learning (MARL) framework with state transition kernel uncertainty for EV AMoD systems.

Fairness • Multi-agent Reinforcement Learning • +1

Robust Constrained Reinforcement Learning

no code implementations • 14 Sep 2022 • Yue Wang, Fei Miao, Shaofeng Zou

We then investigate a concrete example of $\delta$-contamination uncertainty set, design an online and model-free algorithm and theoretically characterize its sample complexity.

Adversarial Attack • reinforcement-learning • +1

Finite-Time Error Bounds for Greedy-GQ

no code implementations • 6 Sep 2022 • Yue Wang, Yi Zhou, Shaofeng Zou

Our techniques in this paper provide a general approach for finite-sample analysis of non-convex two timescale value-based reinforcement learning algorithms.

reinforcement-learning • Reinforcement Learning (RL)

Quickest Anomaly Detection in Sensor Networks With Unlabeled Samples

no code implementations • 4 Sep 2022 • Zhongchang Sun, Shaofeng Zou

The goal of the fusion center is to detect the anomaly with minimal detection delay subject to false alarm constraints.

Anomaly Detection

Provably Efficient Offline Reinforcement Learning with Trajectory-Wise Reward

no code implementations • 13 Jun 2022 • Tengyu Xu, Yue Wang, Shaofeng Zou, Yingbin Liang

The remarkable success of reinforcement learning (RL) heavily relies on observing the reward of every visited state-action pair.

Offline RL • reinforcement-learning • +1

Policy Gradient Method For Robust Reinforcement Learning

no code implementations • 15 May 2022 • Yue Wang, Shaofeng Zou

We further develop a smoothed robust policy gradient method and show that to achieve an $\epsilon$-global optimum, the complexity is $\mathcal O(\epsilon^{-3})$.

reinforcement-learning • Reinforcement Learning (RL)

Kernel Robust Hypothesis Testing

no code implementations • 23 Mar 2022 • Zhongchang Sun, Shaofeng Zou

For the Bayesian setting, where the goal is to minimize the worst-case error probability, an optimal test is first obtained when the alphabet is finite.
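Kernel tests of this kind are typically built on the maximum mean discrepancy (MMD). The sketch below shows a plain (non-robust) biased MMD² estimate with a Gaussian kernel, for intuition only; the bandwidth and sample sizes are illustrative:

```python
import numpy as np

def mmd_squared(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between two 1-D samples,
    using a Gaussian kernel."""
    def gram(a, b):
        d = a[:, None] - b[None, :]              # pairwise differences
        return np.exp(-d ** 2 / (2 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 500)
y = rng.normal(0, 1, 500)   # same distribution as x: MMD^2 near zero
z = rng.normal(2, 1, 500)   # mean-shifted distribution: MMD^2 clearly positive
```

A test then compares the statistic against a threshold calibrated for the desired false-alarm rate; the robust versions studied in the paper additionally guard against perturbations of the nominal distributions.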


Quickest Change Detection in Anonymous Heterogeneous Sensor Networks

no code implementations • 26 Feb 2022 • Zhongchang Sun, Shaofeng Zou, Ruizhi Zhang, Qunwei Li

The problem of quickest change detection (QCD) in anonymous heterogeneous sensor networks is studied.

Change Detection

Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process

no code implementations • 20 Oct 2021 • Tianjiao Li, Ziwei Guan, Shaofeng Zou, Tengyu Xu, Yingbin Liang, Guanghui Lan

Despite the challenge of the nonconcave objective subject to nonconcave constraints, the proposed approach is shown to converge to the global optimum with a complexity of $\tilde{\mathcal O}(1/\epsilon)$ in terms of the optimality gap and the constraint violation, which improves the complexity of the existing primal-dual approach by a factor of $\mathcal O(1/\epsilon)$ \citep{ding2020natural, paternain2019constrained}.

Online Robust Reinforcement Learning with Model Uncertainty

no code implementations • NeurIPS 2021 • Yue Wang, Shaofeng Zou

In this paper, we focus on model-free robust RL, where the uncertainty set is defined to be centered at a misspecified MDP that generates a single sample trajectory sequentially and is assumed to be unknown.

Q-Learning • reinforcement-learning • +1

Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis

no code implementations • 8 Sep 2021 • Ziyi Chen, Yi Zhou, Rongrong Chen, Shaofeng Zou

Actor-critic (AC) algorithms have been widely adopted in decentralized multi-agent systems to learn the optimal joint control policy.

Non-Asymptotic Analysis for Two Time-scale TDC with General Smooth Function Approximation

no code implementations • NeurIPS 2021 • Yue Wang, Shaofeng Zou, Yi Zhou

Temporal-difference learning with gradient correction (TDC) is a two time-scale algorithm for policy evaluation in reinforcement learning.

reinforcement-learning • Reinforcement Learning (RL)
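For reference, the snippet below sketches TDC in the classical linear (here tabular) setting rather than the general smooth function approximation analyzed in the paper: a fast weight vector `w` corrects the gradient bias of off-policy TD. The two-state chain and step sizes are illustrative:

```python
import numpy as np

def tdc_linear(P, r, gamma, alpha=0.02, beta=0.05, steps=200_000, seed=0):
    """Two time-scale TDC with tabular features on a Markov reward process.

    theta (slow timescale) holds value-function weights; w (fast timescale)
    tracks the correction term. The second half of the iterates is averaged
    to reduce variance.
    """
    rng = np.random.default_rng(seed)
    n = len(r)
    theta = np.zeros(n)
    w = np.zeros(n)
    avg, count = np.zeros(n), 0
    s = 0
    for t in range(steps):
        s_next = rng.choice(n, p=P[s])
        phi, phi_next = np.eye(n)[s], np.eye(n)[s_next]
        delta = r[s] + gamma * theta @ phi_next - theta @ phi   # TD error
        theta += alpha * (delta * phi - gamma * (w @ phi) * phi_next)
        w += beta * (delta - w @ phi) * phi
        if t >= steps // 2:
            avg += theta
            count += 1
        s = s_next
    return avg / count

P = np.array([[0.9, 0.1], [0.1, 0.9]])   # two-state chain
r = np.array([1.0, 0.0])
v_hat = tdc_linear(P, r, gamma=0.9)
v_true = np.linalg.solve(np.eye(2) - 0.9 * P, r)   # exact values for comparison
```

With tabular features the averaged estimate approaches the exact values `v_true`; the fast step size `beta` is kept larger than `alpha`, matching the two time-scale structure.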

Learning Graph Neural Networks with Approximate Gradient Descent

no code implementations • 7 Dec 2020 • Qunwei Li, Shaofeng Zou, Wenliang Zhong

Two types of GNNs are investigated, depending on whether labels are attached to nodes or graphs.

Variance-Reduced Off-Policy TDC Learning: Non-Asymptotic Convergence Analysis

no code implementations • NeurIPS 2020 • Shaocong Ma, Yi Zhou, Shaofeng Zou

In the Markovian setting, our algorithm achieves the state-of-the-art sample complexity $\mathcal{O}(\epsilon^{-1} \log \epsilon^{-1})$ that is near-optimal.

Two Time-scale Off-Policy TD Learning: Non-asymptotic Analysis over Markovian Samples

no code implementations • NeurIPS 2019 • Tengyu Xu, Shaofeng Zou, Yingbin Liang

Gradient-based temporal difference (GTD) algorithms are widely used in off-policy learning scenarios.

Information-Theoretic Understanding of Population Risk Improvement with Model Compression

1 code implementation • 27 Jan 2019 • Yuheng Bu, Weihao Gao, Shaofeng Zou, Venugopal V. Veeravalli

We show that model compression can improve the population risk of a pre-trained model, by studying the tradeoff between the decrease in the generalization error and the increase in the empirical risk with model compression.

Clustering • Model Compression

Tightening Mutual Information Based Bounds on Generalization Error

no code implementations • 15 Jan 2019 • Yuheng Bu, Shaofeng Zou, Venugopal V. Veeravalli

The bound is derived under more general conditions on the loss function than in existing studies; nevertheless, it provides a tighter characterization of the generalization error.

Linear-Complexity Exponentially-Consistent Tests for Universal Outlying Sequence Detection

no code implementations • 21 Jan 2017 • Yuheng Bu, Shaofeng Zou, Venugopal V. Veeravalli

A sequence is considered as outlying if the observations therein are generated by a distribution different from those generating the observations in the majority of the sequences.


Nonparametric Detection of Geometric Structures over Networks

no code implementations • 5 Apr 2016 • Shaofeng Zou, Yingbin Liang, H. Vincent Poor

Sufficient conditions on the minimum and maximum sizes of candidate anomalous intervals are characterized in order to guarantee that the proposed test is consistent.


Nonparametric Detection of Anomalous Data Streams

no code implementations • 25 Apr 2014 • Shaofeng Zou, Yingbin Liang, H. Vincent Poor, Xinghua Shi

Typical sequences contain i.i.d. samples drawn from a distribution p, whereas each anomalous sequence contains m i.i.d. samples drawn from a different distribution q.

Test • Two-sample testing

A Kernel-Based Nonparametric Test for Anomaly Detection over Line Networks

no code implementations • 1 Apr 2014 • Shaofeng Zou, Yingbin Liang, H. Vincent Poor

If no anomalous interval exists, then all nodes receive samples generated by p. The distributions p and q are assumed to be arbitrary and unknown.

Anomaly Detection Test
