no code implementations • 25 Mar 2024 • Zeyu Jia, Alexander Rakhlin, Ayush Sekhari, Chen-Yu Wei
We revisit the problem of offline reinforcement learning with value function realizability but without Bellman completeness.
no code implementations • 14 Nov 2022 • Zeyu Jia, Randy Jia, Dhruv Madeka, Dean P. Foster
We study the problem of Reinforcement Learning (RL) with linear function approximation, i.e., assuming the optimal action-value function is linear in a known $d$-dimensional feature mapping.
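A minimal sketch of the linear-realizability assumption named above (illustrative only, not the paper's algorithm): if $Q^*(s,a) = \phi(s,a)^\top \theta$ for a known feature map $\phi$, then the unknown parameter $\theta$ is recoverable by least squares from (feature, value) pairs. The names and data here are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: Q*(s, a) = phi(s, a) @ theta for a known
# d-dimensional feature map phi; theta is recovered by least squares
# from noiseless (feature, value) samples.
rng = np.random.default_rng(0)
d = 4
theta_true = rng.normal(size=d)          # unknown linear parameter
features = rng.normal(size=(50, d))      # phi(s_i, a_i) for sampled pairs
values = features @ theta_true           # Q*(s_i, a_i), noiseless here

theta_hat, *_ = np.linalg.lstsq(features, values, rcond=None)
assert np.allclose(theta_hat, theta_true)
```

With noisy targets the same fit would only approximate $\theta$, which is where the paper's actual analysis begins.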
no code implementations • 8 Jun 2021 • Adam Block, Zeyu Jia, Yury Polyanskiy, Alexander Rakhlin
It has long been thought that high-dimensional data encountered in many practical machine learning tasks have low-dimensional structure, i.e., the manifold hypothesis holds.
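A hedged sketch of the manifold hypothesis in its simplest (linear) form, with synthetic data invented for illustration: high-dimensional samples that actually lie on a low-dimensional subspace reveal their intrinsic dimension through the rank of the data matrix.

```python
import numpy as np

# Illustrative example: 200 points in R^50 that secretly live on a
# 3-dimensional linear subspace; the number of nonzero singular values
# of the data matrix recovers the intrinsic dimension.
rng = np.random.default_rng(1)
intrinsic, ambient = 3, 50
basis = rng.normal(size=(intrinsic, ambient))    # 3-dim subspace in R^50
coords = rng.normal(size=(200, intrinsic))       # low-dim coordinates
data = coords @ basis                            # embedded high-dim samples

singular_values = np.linalg.svd(data, compute_uv=False)
estimated_dim = int(np.sum(singular_values > 1e-8))
assert estimated_dim == intrinsic
```

Real data manifolds are curved rather than linear, so rank counting fails there; this only illustrates what "low-dimensional structure inside a high-dimensional ambient space" means.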
no code implementations • ICML 2020 • Alex Ayoub, Zeyu Jia, Csaba Szepesvari, Mengdi Wang, Lin F. Yang
We propose a model-based RL algorithm based on the optimism principle: in each episode, the set of models that are 'consistent' with the data collected so far is constructed.
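A minimal sketch of the optimism principle described above (the setup and numbers are invented for illustration, not the paper's algorithm): keep every candidate model whose predictions fit the observed data within a confidence width, then evaluate using the most optimistic model in that set.

```python
# Illustrative one-arm example: candidate models of a mean reward,
# filtered down to those consistent with the empirical mean, then
# scored optimistically (best value among surviving models).
candidate_means = [0.2, 0.5, 0.8]        # hypothetical candidate models
observations = [0.55, 0.45, 0.6, 0.5]    # data collected so far
width = 0.2                              # confidence radius (assumed)

empirical = sum(observations) / len(observations)   # 0.525
consistent = [m for m in candidate_means if abs(m - empirical) <= width]
optimistic_value = max(consistent)       # optimism over the consistent set
assert consistent == [0.5] and optimistic_value == 0.5
```

In the full RL setting the "models" are transition dynamics and the optimistic choice drives exploration, but the filter-then-maximize structure is the same.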
no code implementations • 25 Sep 2019 • Zeyu Jia, Simon S. Du, Ruosong Wang, Mengdi Wang, Lin F. Yang
Modern complex sequential decision-making problems often involve both low-level policy execution and high-level planning.
no code implementations • 2 Jun 2019 • Zeyu Jia, Lin F. Yang, Mengdi Wang
Consider a two-player zero-sum stochastic game where the transition function can be embedded in a given feature space.
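As background for the two-player zero-sum setting named above, here is an illustrative stage-game computation (not the paper's method): when a payoff matrix has a pure saddle point, the row player's maximin equals the column player's minimax, and that common number is the game's value.

```python
# Illustrative 2x2 zero-sum stage game with a pure saddle point.
# Row player maximizes the payoff, column player minimizes it.
payoff = [[1, 2],
          [0, 3]]

maximin = max(min(row) for row in payoff)                          # row's guarantee
minimax = min(max(payoff[i][j] for i in range(2)) for j in range(2))  # column's guarantee
assert maximin == minimax == 1   # saddle point at (row 0, col 0): value 1
```

General matrix games need mixed strategies (a linear program) to equalize these two quantities; a stochastic game additionally chains such stage values through the transition function.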