no code implementations • 19 Apr 2022 • Chuanhong Liu, Caili Guo, Yang Yang, Nan Jiang
To solve the problem, both the compression ratio and the resource allocation are optimized for the task-oriented communication system to maximize the probability of task success.
no code implementations • 25 Mar 2022 • Jinglin Chen, Nan Jiang
We consider a challenging theoretical problem in offline reinforcement learning (RL): obtaining sample-efficiency guarantees with a dataset lacking sufficient coverage, under only realizability-type assumptions for the function approximators.
no code implementations • 15 Mar 2022 • Ziyang Song, Dongliang Wang, Nan Jiang, Zhicheng Fang, Chenjing Ding, Weihao Gan, Wei Wu
Such a design combines the strong spatio-temporal representation capacity of the Transformer, the superiority of the GAN in generative modeling, and the inherent temporal correlations from the latent prior.
no code implementations • 16 Feb 2022 • Zhu Wang, Honglong Chen, Zhe Li, Kai Lin, Nan Jiang, Feng Xia
Fortunately, context-aware recommender systems can alleviate the sparsity problem by making use of some auxiliary information, such as the information of both the users and items.
no code implementations • ICLR 2022 • Jiawei Huang, Jinglin Chen, Li Zhao, Tao Qin, Nan Jiang, Tie-Yan Liu
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
no code implementations • 9 Feb 2022 • Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee
Sample-efficiency guarantees for offline reinforcement learning (RL) often rely on strong assumptions on both the function classes (e.g., Bellman-completeness) and the data coverage (e.g., all-policy concentrability).
1 code implementation • 5 Feb 2022 • Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal
We propose Adversarially Trained Actor Critic (ATAC), a new model-free algorithm for offline reinforcement learning under insufficient data coverage, based on a two-player Stackelberg game framing of offline RL: A policy actor competes against an adversarially trained value critic, who finds data-consistent scenarios where the actor is inferior to the data-collection behavior policy.
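As a rough illustration of the two-player objective described above, the sketch below shows a relative-pessimism term plus a Bellman-consistency penalty on a tabular batch. It is a hedged schematic only: the variable names, the squared-TD penalty, and the data layout are assumptions for illustration, not the authors' implementation.

```python
# Schematic of an ATAC-style two-player objective on a tabular batch.
# Illustrative sketch only; the squared Bellman-error penalty and the
# names below are assumptions, not the paper's actual estimator.
import numpy as np

def critic_objective(f, pi, batch, beta, gamma=0.99):
    """Critic loss: relative-pessimism term plus a Bellman-consistency penalty."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    # E_data[ f(s, pi) - f(s, a) ]: how much the actor's policy looks better than
    # the behavior data under critic f (the adversarial critic minimizes this).
    pessimism = np.mean((pi[s] * f[s]).sum(axis=1) - f[s, a])
    # Squared Bellman error of f w.r.t. pi on the batch (consistency penalty).
    td_target = r + gamma * (pi[s_next] * f[s_next]).sum(axis=1)
    bellman = np.mean((f[s, a] - td_target) ** 2)
    return pessimism + beta * bellman

# The actor then maximizes against the adversarially chosen critic:
#   pi_hat = argmax_pi  min_f  critic_objective(f, pi, batch, beta)

# Tiny dummy example (2 states, 2 actions) just to exercise the function.
rng = np.random.default_rng(0)
f = rng.normal(size=(2, 2))
pi = np.full((2, 2), 0.5)
batch = {"s": np.array([0, 1]), "a": np.array([0, 1]),
         "r": np.array([1.0, 0.0]), "s_next": np.array([1, 0])}
print(critic_objective(f, pi, batch, beta=1.0))
```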
no code implementations • 12 Nov 2021 • Chengchun Shi, Masatoshi Uehara, Jiawei Huang, Nan Jiang
In this work, we first propose novel identification methods for OPE in POMDPs with latent confounders, by introducing bridge functions that link the target policy's value and the observed data distribution.
1 code implementation • NeurIPS 2021 • Siyuan Zhang, Nan Jiang
How to select between policies and value functions produced by different training algorithms in offline reinforcement learning (RL) -- which is crucial for hyperparameter tuning -- is an important open question.
no code implementations • 6 Oct 2021 • Nan Jiang, Chen Luo, Vihan Lakshman, Yesh Dattatreya, Yexiang Xue
In addition, FLAN does not require any annotated data or supervised learning.
no code implementations • 22 Sep 2021 • Yash Nair, Nan Jiang
We consider off-policy evaluation (OPE) in Partially Observable Markov Decision Processes, where the evaluation policy depends only on observable variables but the behavior policy depends on latent states (Tennenholtz et al. (2020a)).
no code implementations • NeurIPS 2021 • Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, Alekh Agarwal
The use of pessimism when reasoning about datasets lacking exhaustive exploration has recently gained prominence in offline reinforcement learning.
no code implementations • NeurIPS 2021 • Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai
This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL.
no code implementations • 2 Jun 2021 • Jiawei Huang, Nan Jiang
In this paper, we study the convergence properties of off-policy policy improvement algorithms with state-action density ratio correction in the function approximation setting, where the objective function is formulated as a max-max-min optimization problem.
no code implementations • 2 Mar 2021 • Cameron Voloshin, Nan Jiang, Yisong Yue
We present a novel off-policy loss function for learning a transition model in model-based reinforcement learning.
1 code implementation • 26 Feb 2021 • Nan Jiang, Thibaud Lutellier, Lin Tan
Finally, CURE uses a subword tokenization technique to generate a smaller search space that contains more correct fixes.
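As a toy illustration of why subword units shrink the candidate space, the sketch below uses a greedy longest-match tokenizer over a made-up vocabulary; it is not CURE's actual tokenizer, only an assumed stand-in showing how a small set of frequent subwords can still compose rare identifiers.

```python
# Toy greedy longest-match subword tokenizer: a small vocabulary of frequent
# subwords can compose rare project-specific identifiers, so the repair model
# searches over far fewer symbols than a word-level vocabulary would require.
SUBWORDS = {"get", "set", "Index", "Count", "is", "Empty", "Array", "List"}

def tokenize(identifier, vocab=SUBWORDS):
    tokens, i = [], 0
    while i < len(identifier):
        for j in range(len(identifier), i, -1):   # longest match first
            if identifier[i:j] in vocab:
                tokens.append(identifier[i:j])
                i = j
                break
        else:                                     # fall back to a single character
            tokens.append(identifier[i])
            i += 1
    return tokens

print(tokenize("getArrayCount"))   # ['get', 'Array', 'Count']
```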
no code implementations • 14 Feb 2021 • Aditya Modi, Jinglin Chen, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal
In this work, we present the first model-free representation learning algorithms for low rank MDPs.
no code implementations • 6 Feb 2021 • Nan Jiang, Xuehui Yu, Xiaoke Peng, Yuqi Gong, Zhenjun Han
Detecting tiny objects (e.g., less than 20 x 20 pixels) in large-scale images is an important yet open problem.
no code implementations • 5 Feb 2021 • Masatoshi Uehara, Masaaki Imaizumi, Nan Jiang, Nathan Kallus, Wen Sun, Tengyang Xie
We offer a theoretical characterization of off-policy evaluation (OPE) in reinforcement learning using function approximation for marginal importance weights and $q$-functions when these are estimated using recent minimax methods.
no code implementations • 3 Feb 2021 • Gellért Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári
We consider local planning in fixed-horizon MDPs with a generative model under the assumption that the optimal value function lies close to the span of a feature map.
1 code implementation • 21 Jan 2021 • Nan Jiang, Kuiran Wang, Xiaoke Peng, Xuehui Yu, Qiang Wang, Junliang Xing, Guorong Li, Jian Zhao, Guodong Guo, Zhenjun Han
The release of such a large-scale dataset could be a useful initial step in research on UAV tracking.
no code implementations • 21 Jan 2021 • Yunfei Pu, Sheng Zhang, Yukai Wu, Nan Jiang, Wei Chang, Chang Li, Luming Duan
The experimental realization of entanglement connection of two quantum repeater segments with an efficient memory-enhanced scaling demonstrates a key advantage of the quantum repeater protocol, and constitutes a cornerstone towards future large-scale quantum networks.
Quantum Physics
1 code implementation • 1 Jan 2021 • Jiawei Xue, Nan Jiang, Senwei Liang, Qiyuan Pang, Takahiro Yabe, Satish V. Ukkusuri, Jianzhu Ma
We apply the method to 11,790 urban road networks across 30 cities worldwide to measure the spatial homogeneity of road networks within each city and across different cities.
1 code implementation • NeurIPS 2020 • Nan Jiang, Sheng Jin, Zhiyao Duan, ChangShui Zhang
An interaction reward model is trained on the duets formed from outer parts of Bach chorales to model counterpoint interaction, while a style reward model is trained on monophonic melodies of Chinese folk songs to model melodic patterns.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Maosen Zhang, Nan Jiang, Lei LI, Yexiang Xue
Generating natural language under complex constraints provides a principled formulation of controllable text generation.
no code implementations • 2 Nov 2020 • Philip Amortila, Nan Jiang, Tengyang Xie
Recently, Wang et al. (2020) showed a highly intriguing hardness result for batch reinforcement learning (RL) with linearly realizable value function and good feature coverage in the finite-horizon case.
no code implementations • 23 Oct 2020 • Priyank Agrawal, Jinglin Chen, Nan Jiang
This paper studies regret minimization with randomized value functions in reinforcement learning.
1 code implementation • 16 Sep 2020 • Xuehui Yu, Zhenjun Han, Yuqi Gong, Nan Jiang, Jian Zhao, Qixiang Ye, Jie Chen, Yuan Feng, Bin Zhang, Xiaodi Wang, Ying Xin, Jingwei Liu, Mingyuan Mao, Sheng Xu, Baochang Zhang, Shumin Han, Cheng Gao, Wei Tang, Lizuo Jin, Mingbo Hong, Yuchao Yang, Shuiwang Li, Huan Luo, Qijun Zhao, Humphrey Shi
The 1st Tiny Object Detection (TOD) Challenge aims to encourage research in developing novel and accurate methods for tiny object detection in images with wide fields of view, with a current focus on tiny person detection.
no code implementations • 14 Sep 2020 • Yan Liu, Yansha Deng, Nan Jiang, Maged Elkashlan, Arumugam Nallanathan
NarrowBand-Internet of Things (NB-IoT) is a new 3GPP radio access technology designed to provide better coverage for Low Power Wide Area (LPWA) networks.
1 code implementation • 11 Aug 2020 • Tengyang Xie, Nan Jiang
We make progress in a long-standing problem of batch reinforcement learning (RL): learning $Q^\star$ from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class.
no code implementations • WS 2020 • Xiuyu Wu, Nan Jiang, Yunfang Wu
Answer-agnostic question generation is a significant and challenging task that aims to automatically generate questions for a given sentence without being provided an answer.
no code implementations • 9 Mar 2020 • Tengyang Xie, Nan Jiang
We prove performance guarantees of two algorithms for approximating $Q^\star$ in batch reinforcement learning.
no code implementations • 8 Feb 2020 • Nan Jiang, Sheng Jin, Zhiyao Duan, Chang-Shui Zhang
We cast this as a reinforcement learning problem, where the generation agent learns a policy to generate a musical note (action) based on previously generated context (state).
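A hedged sketch of such an MDP interface is given below; the class and method names are hypothetical, and the learned reward model is stubbed out as a placeholder rather than the paper's trained model.

```python
# Hypothetical sketch of the MDP described above: the state is the recently
# generated context, the action is the next note, and the reward comes from
# a reward model that scores (context, note) pairs (stubbed here).
from collections import deque

class MelodyEnv:
    def __init__(self, reward_model, context_len=16, n_pitches=128):
        self.reward_model = reward_model        # scores (context, note) pairs
        self.context = deque(maxlen=context_len)
        self.n_pitches = n_pitches

    def reset(self):
        self.context.clear()
        return tuple(self.context)

    def step(self, note):
        assert 0 <= note < self.n_pitches
        reward = self.reward_model(tuple(self.context), note)
        self.context.append(note)
        return tuple(self.context), reward

# Usage with a trivial placeholder reward model.
env = MelodyEnv(reward_model=lambda ctx, note: 0.0)
state = env.reset()
state, r = env.step(60)    # generate middle C as the next action
```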
no code implementations • NeurIPS 2020 • Nan Jiang, Jiawei Huang
By slightly altering the derivation of previous methods (one from each style; Uehara et al., 2020), we unify them into a single value interval that comes with a special type of double robustness: when either the value-function or the importance-weight class is well specified, the interval is valid and its length quantifies the misspecification of the other class.
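As background for the double robustness claimed here, the Lagrangian underlying this minimax OPE line of work can be written as follows (the notation is a simplified paraphrase assumed for this summary, not taken verbatim from the paper):

$$
L(q, w) \;=\; (1-\gamma)\,\mathbb{E}_{s_0 \sim d_0}\big[q(s_0, \pi)\big] \;+\; \mathbb{E}_{(s,a,r,s') \sim \mu}\Big[w(s,a)\big(r + \gamma\, q(s', \pi) - q(s,a)\big)\Big],
$$

where $q(s,\pi) := \mathbb{E}_{a \sim \pi(\cdot|s)}[q(s,a)]$ and $\mu$ is the data distribution. If $q = Q^{\pi}$, the second term has zero mean and $L(q,w)$ equals the (normalized) value of $\pi$ for any $w$; if instead $w$ is the true marginalized density ratio $d^{\pi}/\mu$, the terms telescope and $L(q,w)$ again equals the value of $\pi$ for any $q$. Optimizing such an objective over one class while the other is chosen adversarially is what yields interval endpoints that remain valid whenever either class is well specified.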
1 code implementation • 23 Dec 2019 • Xuehui Yu, Yuqi Gong, Nan Jiang, Qixiang Ye, Zhenjun Han
In this paper, we introduce a new benchmark, referred to as TinyPerson, opening up a promising direction for tiny object detection at long distances and with massive backgrounds.
3 code implementations • 15 Nov 2019 • Cameron Voloshin, Hoang M. Le, Nan Jiang, Yisong Yue
We offer an experimental benchmark and empirical study for off-policy policy evaluation (OPE) in reinforcement learning, which is a key problem in many safety critical applications.
no code implementations • ICML 2020 • Masatoshi Uehara, Jiawei Huang, Nan Jiang
We provide theoretical investigations into off-policy evaluation in reinforcement learning using function approximators for (marginalized) importance weights and value functions.
no code implementations • 23 Oct 2019 • Aditya Modi, Nan Jiang, Ambuj Tewari, Satinder Singh
As an extension, we also consider the more challenging problem of model selection, where the state features are unknown and can be chosen from a large candidate set.
1 code implementation • ICML 2020 • Jiawei Huang, Nan Jiang
We show that on-policy policy gradient (PG) and its variance reduction variants can be derived by taking finite difference of function evaluations supplied by estimators from the importance sampling (IS) family for off-policy evaluation (OPE).
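The one-step toy check below illustrates this connection: finite-differencing an importance-sampling OPE estimate at the behavior policy recovers the REINFORCE estimate on the same samples. The bandit, the softmax parameterization, and the step size are illustrative assumptions, not the paper's experimental setup.

```python
# Toy check: the on-policy PG estimator equals the derivative of an
# importance-sampling OPE estimator evaluated at the behavior policy.
import numpy as np

rng = np.random.default_rng(0)
theta0 = np.array([0.2, -0.1])            # behavior-policy parameters (2 actions)
rewards = np.array([1.0, 0.0])            # deterministic reward of each action

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

# Collect data under the behavior policy pi_{theta0}.
p0 = softmax(theta0)
actions = rng.choice(2, size=100_000, p=p0)
R = rewards[actions]

def is_estimate(theta):
    """Importance-sampling OPE estimate of J(pi_theta) from behavior data."""
    return np.mean(softmax(theta)[actions] / p0[actions] * R)

# Finite difference of the IS estimator along the first coordinate ...
eps = 1e-4
e0 = np.array([1.0, 0.0])
fd_grad = (is_estimate(theta0 + eps * e0) - is_estimate(theta0 - eps * e0)) / (2 * eps)

# ... matches the REINFORCE estimate on the same samples:
# for a softmax policy, d/d(theta_0) log pi(a) = 1{a == 0} - pi_0.
reinforce_grad = np.mean(R * ((actions == 0).astype(float) - p0[0]))
print(fd_grad, reinforce_grad)   # essentially identical (up to O(eps^2))
```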
no code implementations • 5 Sep 2019 • Chang Li, Nan Jiang, Yukai Wu, Wei Chang, Yunfei Pu, Sheng Zhang, Lu-Ming Duan
The use of multiplexed atomic quantum memories (MAQM) can significantly enhance the efficiency to establish entanglement in a quantum network.
Quantum Physics
no code implementations • 30 May 2019 • Nan Jiang
When function approximation is deployed in reinforcement learning (RL), the same problem may be formulated in different ways, often by treating a pre-processing step as a part of the environment or as part of the agent.
no code implementations • NeurIPS 2019 • Yu Bai, Tengyang Xie, Nan Jiang, Yu-Xiang Wang
We take initial steps in studying PAC-MDP algorithms with limited adaptivity, that is, algorithms that change their exploration policy as infrequently as possible during regret minimization.
no code implementations • 1 May 2019 • Jinglin Chen, Nan Jiang
Value-function approximation methods that operate in batch mode have foundational importance to reinforcement learning (RL).
1 code implementation • 25 Jan 2019 • Simon S. Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, John Langford
We study the exploration problem in episodic MDPs with rich observations generated from a small number of latent states.
no code implementations • NeurIPS 2018 • Nan Jiang, Alex Kulesza, Satinder Singh
A central problem in dynamical system modeling is state discovery—that is, finding a compact summary of the past that captures the information needed to predict the future.
no code implementations • 21 Nov 2018 • Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford
We study the sample complexity of model-based reinforcement learning (henceforth RL) in general contextual decision processes that require strategic exploration to find a near-optimal policy.
no code implementations • ICLR 2019 • Bowen Wu, Nan Jiang, Zhifeng Gao, Mengyuan Li, Zongsheng Wang, Suke Li, Qihang Feng, Wenge Rong, Baoxun Wang
Recent advances in sequence-to-sequence learning reveal a purely data-driven approach to the response generation task.
no code implementations • NAACL 2018 • Zhen Xu, Nan Jiang, Bingquan Liu, Wenge Rong, Bowen Wu, Baoxun Wang, Zhuoran Wang, Xiaolong Wang
The experimental results show that our proposed corpus can serve as a new benchmark dataset for the NRG task, and that the presented metrics are promising for guiding the optimization of NRG models by reasonably quantifying the diversity of the generated responses.
no code implementations • 16 May 2018 • Yijie Dang, Nan Jiang, Hao Hu, Zhuoxiao Ji, Wenyin Zhang
However, the commonly used classification method, the K-Nearest-Neighbor algorithm, has high complexity because its two main processes, similarity computation and search, are time-consuming.
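A minimal brute-force k-NN sketch makes that cost concrete: every query pays for a similarity computation against all N stored samples plus a search over the resulting scores. The names and random data below are made up for illustration.

```python
# Minimal brute-force k-nearest-neighbor classifier; every query costs
# O(N * d) distance computations plus an O(N log N) sort over the scores,
# which is the bottleneck the quantum variant targets.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    dists = np.linalg.norm(X_train - x_query, axis=1)   # similarity computation: O(N * d)
    nearest = np.argsort(dists)[:k]                      # search: O(N log N)
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy usage with random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = rng.integers(0, 2, size=1000)
print(knn_predict(X, y, rng.normal(size=16)))
```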
no code implementations • ICML 2018 • Hoang M. Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, Hal Daumé III
We study how to effectively leverage expert feedback to learn sequential decision-making policies.
no code implementations • NeurIPS 2018 • Christoph Dann, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire
We study the computational tractability of PAC reinforcement learning with rich observations.
no code implementations • 15 Nov 2017 • Aditya Modi, Nan Jiang, Satinder Singh, Ambuj Tewari
Because our lower bound has an exponential dependence on the dimension, we consider a tractable linear setting where the context is used to create linear combinations of a finite set of MDPs.
no code implementations • NeurIPS 2017 • Kareem Amin, Nan Jiang, Satinder Singh
We introduce a novel repeated Inverse Reinforcement Learning problem: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks on which it surprises the human by acting suboptimally with respect to how the human would have acted.
no code implementations • ICML 2017 • Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert E. Schapire
Our first contribution is a complexity measure, the Bellman rank, that we show enables tractable learning of near-optimal behavior in these processes and is naturally small for many well-studied reinforcement learning settings.
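For context, the Bellman rank can be stated roughly as follows; the notation is a simplified paraphrase assumed for this summary, not the paper's exact definition. For a hypothesis $f$ with greedy policy $\pi_f$ and a roll-in hypothesis $g$, define the average Bellman error at level $h$

$$
\mathcal{E}_h(f, g) \;=\; \mathbb{E}\Big[\, f(s_h, a_h) - r_h - f(s_{h+1}, \pi_f) \;\Big|\; s_h \sim \pi_g,\; a_h \sim \pi_f \Big],
$$

where $f(s,\pi) := \mathbb{E}_{a \sim \pi(\cdot|s)}[f(s,a)]$. The Bellman rank is the largest rank, over levels $h$, of the matrix whose $(g, f)$ entry is $\mathcal{E}_h(f, g)$, with rows indexed by roll-in hypotheses and columns by evaluated hypotheses. A small Bellman rank means these average Bellman errors lie in a low-dimensional subspace, which is roughly what lets an elimination-style algorithm rule out many hypotheses from a few roll-outs.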
no code implementations • 1 Sep 2016 • Junqi Jin, Ziang Yan, Kun fu, Nan Jiang, Chang-Shui Zhang
A deep learning model's architecture, including its depth and width, is a key factor influencing the model's performance, such as test accuracy and computation time.
no code implementations • 29 Aug 2016 • Junqi Jin, Ziang Yan, Kun fu, Nan Jiang, Chang-Shui Zhang
A greedy algorithm with bounds is suggested to solve the transformed problem.
1 code implementation • 8 Mar 2016 • Byeongkeun Kang, Kar-Han Tan, Nan Jiang, Hung-Shuo Tai, Daniel Tretter, Truong Q. Nguyen
Thus, we propose a hand segmentation method for hand-object interaction using only a depth map.
no code implementations • 15 Nov 2015 • Yikang Shen, Wenge Rong, Nan Jiang, Baolin Peng, Jie Tang, Zhang Xiong
With the development of community-based question answering (Q&A) services, large-scale Q&A archives have been accumulated and have become an important information and knowledge resource on the web.
2 code implementations • 11 Nov 2015 • Nan Jiang, Lihong Li
We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy.
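For readers new to this setting, the simplest estimator in the family is trajectory-wise importance sampling; the sketch below (variable names and data layout are made up, and the paper's doubly robust estimator is not shown) illustrates the basic reweighting.

```python
# Trajectory-wise importance sampling for off-policy value evaluation:
# reweight each trajectory's return by the cumulative likelihood ratio
# between the evaluation policy and the behavior policy.
import numpy as np

def is_value_estimate(trajectories, pi_eval, pi_behavior, gamma=0.99):
    """trajectories: list of [(s, a, r), ...]; pi_*(a, s) -> action probability."""
    estimates = []
    for traj in trajectories:
        rho, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= pi_eval(a, s) / pi_behavior(a, s)   # cumulative importance weight
            ret += (gamma ** t) * r
        estimates.append(rho * ret)
    return np.mean(estimates)   # unbiased, but variance grows with the horizon
```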
no code implementations • CVPR 2014 • Nan Jiang, Ying Wu
This paper presents a novel method to jointly determine the best spatial location and the optimal metric.