Search Results for author: Chao Tian

Found 13 papers, 1 paper with code

Exactly Tight Information-Theoretic Generalization Error Bound for the Quadratic Gaussian Problem

no code implementations · 1 May 2023 · Ruida Zhou, Chao Tian, Tie Liu

We further show that although the conditional bounding and the reference distribution can make the bound exactly tight, removing them does not significantly degrade the bound, which leads to a mutual-information-based bound that is also asymptotically tight in this setting.

Optimization of Cryptocurrency Miners' Participation in Ancillary Service Markets

no code implementations · 13 Mar 2023 · Ali Menati, Yuting Cai, Rayan El Helou, Chao Tian, Le Xie

One of the most significant bottlenecks for the scalable deployment of such computation is its energy demand.

Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning

1 code implementation · 10 Jun 2022 · Ruida Zhou, Tao Liu, Dileep Kalathil, P. R. Kumar, Chao Tian

We study policy optimization for Markov decision processes (MDPs) with multiple reward value functions, which are to be jointly optimized according to given criteria such as proportional fairness (smooth concave scalarization), hard constraints (constrained MDP), and max-min trade-off.

Fairness · Multi-Objective Reinforcement Learning · +1
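The three criteria named in the abstract can be illustrated as scalarizations of a vector of per-objective value functions. The sketch below is ours for illustration only (function names are our own, not the paper's API), showing how a candidate policy's value vector might be scored under each criterion:

```python
import math

def proportional_fairness(values):
    # Smooth concave scalarization: sum of logs of the per-objective values,
    # which rewards balanced improvement across objectives.
    return sum(math.log(v) for v in values)

def max_min(values):
    # Max-min trade-off: score a policy by its worst objective.
    return min(values)

def constrained(values, thresholds):
    # Hard constraints (constrained MDP): maximize objective 0 subject to
    # the remaining objectives meeting their thresholds.
    feasible = all(v >= t for v, t in zip(values[1:], thresholds))
    return values[0] if feasible else float('-inf')
```

For example, under proportional fairness the balanced value vector `[1.5, 1.5]` scores higher than the unbalanced `[1.0, 2.0]`, even though both sum to 3.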

On Top-$k$ Selection from $m$-wise Partial Rankings via Borda Counting

no code implementations · 11 Apr 2022 · Wenjing Chen, Ruida Zhou, Chao Tian, Cong Shen

In the special case of $m=2$, i.e., pairwise comparison, the resultant bound is tighter than that given by Shah et al., leading to a reduced gap between the upper and lower bounds on the error probability.
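Borda counting itself is straightforward to sketch. The following is a minimal illustration (our own, not the paper's algorithm): each item receives a positional score within every partial ranking it appears in, normalized so rankings of different sizes $m$ are comparable, and the top-$k$ items by average score are selected:

```python
from collections import defaultdict

def borda_top_k(partial_rankings, k):
    """Select top-k items by Borda counting over m-wise partial rankings.

    Each partial ranking lists items best-to-worst and is assumed to have
    length m >= 2. An item at position pos beats (m - 1 - pos) items; dividing
    by (m - 1) normalizes scores across rankings of different sizes.
    """
    scores = defaultdict(float)
    counts = defaultdict(int)
    for ranking in partial_rankings:
        m = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += (m - 1 - pos) / (m - 1)
            counts[item] += 1
    # Average each item's score over the rankings it appeared in.
    avg = {item: scores[item] / counts[item] for item in scores}
    return sorted(avg, key=avg.get, reverse=True)[:k]
```

For instance, with rankings `[['a','b','c'], ['a','c'], ['b','c']]`, item `a` averages 1.0, `b` averages 0.75, and `c` averages 0.0, so the top-2 selection is `['a', 'b']`.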

Approximate Top-$m$ Arm Identification with Heterogeneous Reward Variances

no code implementations · 11 Apr 2022 · Ruida Zhou, Chao Tian

We study the effect of reward variance heterogeneity in the approximate top-$m$ arm identification setting.
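One generic way variance heterogeneity enters such problems is through the sampling budget: noisier arms need more pulls for comparable confidence widths. The snippet below is a simple variance-proportional allocation heuristic of our own for illustration; it is not the allocation rule derived in the paper:

```python
def allocate_budget(variances, budget):
    # Heuristic: pull each arm in proportion to its reward variance, so that
    # the confidence widths (which shrink like sqrt(var / n)) are comparable
    # across arms. Every arm gets at least one pull.
    total = sum(variances)
    return [max(1, round(budget * v / total)) for v in variances]
```

With variances `[1.0, 4.0, 1.0]` and a budget of 60 pulls, the noisy middle arm receives 40 pulls and the other two receive 10 each.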

Policy Optimization for Constrained MDPs with Provable Fast Global Convergence

no code implementations · 31 Oct 2021 · Tao Liu, Ruida Zhou, Dileep Kalathil, P. R. Kumar, Chao Tian

We propose a new algorithm called policy mirror descent-primal dual (PMD-PD) algorithm that can provably achieve a faster $\mathcal{O}(\log(T)/T)$ convergence rate for both the optimality gap and the constraint violation.

A Fast PC Algorithm with Reversed-order Pruning and A Parallelization Strategy

no code implementations · 10 Sep 2021 · Kai Zhang, Chao Tian, Kun Zhang, Todd Johnson, Xiaoqian Jiang

The PC algorithm is the state-of-the-art algorithm for causal structure discovery on observational data.
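For context, the core of the standard PC algorithm is its skeleton phase: start from the complete undirected graph and delete an edge X–Y as soon as some conditioning set drawn from X's current neighbours renders X and Y conditionally independent. The sketch below shows this classic phase with a pluggable independence test; it does not implement the reversed-order pruning or parallelization of this paper:

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test, max_level=2):
    """Skeleton phase of the (standard) PC algorithm, as a sketch.

    ci_test(x, y, s) must return True when x and y are conditionally
    independent given the set s. Conditioning sets of size 0..max_level are
    drawn from the current neighbours of x, excluding y.
    """
    adj = {v: set(nodes) - {v} for v in nodes}
    for level in range(max_level + 1):
        for x in nodes:
            for y in list(adj[x]):
                candidates = adj[x] - {y}
                if len(candidates) < level:
                    continue
                for s in combinations(sorted(candidates), level):
                    if ci_test(x, y, set(s)):
                        # Remove the edge in both directions and move on.
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj
```

Run against an oracle independence test for the chain A → B → C (where A and C are independent given B), the skeleton correctly recovers the edges A–B and B–C while deleting A–C.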

Learning Policies with Zero or Bounded Constraint Violation for Constrained MDPs

no code implementations · NeurIPS 2021 · Tao Liu, Ruida Zhou, Dileep Kalathil, P. R. Kumar, Chao Tian

We show that when a strictly safe policy is known, then one can confine the system to zero constraint violation with arbitrarily high probability while keeping the reward regret of order $\tilde{\mathcal{O}}(\sqrt{K})$.

Safe Exploration

Individually Conditional Individual Mutual Information Bound on Generalization Error

no code implementations · 17 Dec 2020 · Ruida Zhou, Chao Tian, Tie Liu

We propose a new information-theoretic bound on generalization error based on a combination of the error decomposition technique of Bu et al. and the conditional mutual information (CMI) construction of Steinke and Zakynthinou.

LEMMA

Train Once, and Decode As You Like

no code implementations · COLING 2020 · Chao Tian, Yifei Wang, Hao Cheng, Yijiang Lian, Zhihua Zhang

In this paper we propose a unified approach for supporting different generation manners of machine translation, including autoregressive, semi-autoregressive, and refinement-based non-autoregressive models.

Machine Translation · Translation

On the Information Leakage in Private Information Retrieval Systems

no code implementations · 25 Sep 2019 · Tao Guo, Ruida Zhou, Chao Tian

We further characterize the optimal tradeoff between the minimum amount of common randomness and the total leakage.

Information Retrieval · Retrieval
