Search Results for author: Botao Hao

Found 21 papers, 3 papers with code

Regret Bounds for Information-Directed Reinforcement Learning

no code implementations · 9 Jun 2022 · Botao Hao, Tor Lattimore

Information-directed sampling (IDS) has demonstrated its potential as a data-efficient algorithm for reinforcement learning (RL).


Contextual Information-Directed Sampling

no code implementations · 22 May 2022 · Botao Hao, Tor Lattimore, Chao Qin

Information-directed sampling (IDS) has recently demonstrated its potential as a data-efficient reinforcement learning algorithm.

Multi-Armed Bandits, Reinforcement Learning

Interacting Contour Stochastic Gradient Langevin Dynamics

1 code implementation · ICLR 2022 · Wei Deng, Siqi Liang, Botao Hao, Guang Lin, Faming Liang

We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions.

Evaluating Predictive Distributions: Does Bayesian Deep Learning Work?

no code implementations · 29 Sep 2021 · Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Xiuyuan Lu, Morteza Ibrahimi, Vikranth Dwaracherla, Dieterich Lawson, Brendan O'Donoghue, Botao Hao, Benjamin Van Roy

This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions.

Efficient Local Planning with Linear Function Approximation

no code implementations · 12 Aug 2021 · Dong Yin, Botao Hao, Yasin Abbasi-Yadkori, Nevena Lazić, Csaba Szepesvári

Under the assumption that the Q-functions of all policies are linear in known features of the state-action pairs, we show that our algorithms have polynomial query and computational costs in the dimension of the features, the effective planning horizon, and the targeted sub-optimality, while these costs are independent of the size of the state space.
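The abstract above hinges on the assumption that every policy's Q-function is linear in known state-action features, so planning only touches d-dimensional quantities. A minimal numpy sketch of that assumption (the feature map `phi` and weight vector `w` below are illustrative stand-ins, not taken from the paper):

```python
import numpy as np

d = 4  # feature dimension

def phi(state, action):
    """Hypothetical feature map: deterministic pseudo-random features
    keyed by integer (state, action) indices (illustrative only)."""
    g = np.random.default_rng(1000 * state + action)
    return g.standard_normal(d)

# Linear-Q assumption: Q_pi(s, a) = phi(s, a) @ w for some w in R^d.
w = np.array([0.5, -1.0, 0.25, 2.0])

def q_value(state, action):
    return float(phi(state, action) @ w)

# A greedy local-planning step then costs only d-dimensional arithmetic
# per candidate action, with no sweep over the state space.
actions = [0, 1, 2]
best_action = max(actions, key=lambda a: q_value(7, a))
```

This is the sense in which query and computational costs can depend on the feature dimension d rather than on the number of states.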

Bandit Phase Retrieval

no code implementations · NeurIPS 2021 · Tor Lattimore, Botao Hao

We study a bandit version of phase retrieval where the learner chooses actions $(A_t)_{t=1}^n$ in the $d$-dimensional unit ball and the expected reward is $\langle A_t, \theta_\star\rangle^2$ where $\theta_\star \in \mathbb R^d$ is an unknown parameter vector.
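The reward model in the abstract is concrete enough to simulate directly. A minimal sketch of one round (the Gaussian observation noise and the uniformly random action are assumptions for illustration, not choices from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Unknown parameter theta_star in the d-dimensional unit ball
# (synthetic here; in the bandit problem the learner must infer it).
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)

def play(action):
    """Expected reward is <A_t, theta_star>^2; Gaussian noise is an
    assumed observation model for this sketch."""
    return float(action @ theta_star) ** 2 + rng.normal(scale=0.1)

# One round: choose an action in the unit ball and observe a noisy reward.
a = rng.standard_normal(d)
a /= np.linalg.norm(a)
r = play(a)
```

Averaging many plays of the same action recovers its expected reward ⟨a, θ⋆⟩², which is exactly the quadratic signal the learner must exploit.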

Information Directed Sampling for Sparse Linear Bandits

no code implementations · NeurIPS 2021 · Botao Hao, Tor Lattimore, Wei Deng

Stochastic sparse linear bandits offer a practical model for high-dimensional online decision-making problems and have a rich information-regret structure.

Decision Making

Optimization Issues in KL-Constrained Approximate Policy Iteration

no code implementations · 11 Feb 2021 · Nevena Lazić, Botao Hao, Yasin Abbasi-Yadkori, Dale Schuurmans, Csaba Szepesvári

We compare the use of KL divergence as a constraint vs. as a regularizer, and point out several optimization issues with the widely-used constrained approach.

Bootstrapping Fitted Q-Evaluation for Off-Policy Inference

no code implementations · 6 Feb 2021 · Botao Hao, Xiang Ji, Yaqi Duan, Hao Lu, Csaba Szepesvári, Mengdi Wang

Bootstrapping provides a flexible and effective approach for assessing the quality of batch reinforcement learning, yet its theoretical properties are less understood.


Online Sparse Reinforcement Learning

no code implementations · 8 Nov 2020 · Botao Hao, Tor Lattimore, Csaba Szepesvári, Mengdi Wang

First, we provide a lower bound showing that linear regret is generally unavoidable in this case, even if there exists a policy that collects well-conditioned data.


Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient

no code implementations · 8 Nov 2020 · Botao Hao, Yaqi Duan, Tor Lattimore, Csaba Szepesvári, Mengdi Wang

To evaluate a new target policy, we analyze a Lasso fitted Q-evaluation method and establish a finite-sample error bound that has no polynomial dependence on the ambient dimension.

Feature Selection, Model Selection, +1

High-Dimensional Sparse Linear Bandits

no code implementations · NeurIPS 2020 · Botao Hao, Tor Lattimore, Mengdi Wang

Stochastic linear bandits with high-dimensional sparse features are a practical model for a variety of domains, including personalized medicine and online advertising.

Residual Bootstrap Exploration for Bandit Algorithms

no code implementations · 19 Feb 2020 · Chi-Hua Wang, Yang Yu, Botao Hao, Guang Cheng

In this paper, we propose a novel perturbation-based exploration method in bandit algorithms with bounded or unbounded rewards, called residual bootstrap exploration (\texttt{ReBoot}).
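The exact ReBoot update is not spelled out in this excerpt; the following is a generic sketch of perturbation-based exploration via residual bootstrapping, in the same spirit. Every detail here (the resampling scheme, the index form, the toy arm histories) is an assumption for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def reboot_style_index(rewards):
    """Perturb an arm's sample mean by the average of bootstrap-resampled
    residuals (a sketch in the spirit of ReBoot; details are assumed)."""
    rewards = np.asarray(rewards, dtype=float)
    mean = rewards.mean()
    residuals = rewards - mean
    # Resample residuals with replacement; their mean is a data-driven
    # random perturbation, so no reward-range bound is needed.
    boot = rng.choice(residuals, size=residuals.size, replace=True)
    return mean + boot.mean()

# Choose the arm with the highest perturbed index (two toy arms here).
arm_histories = [[0.9, 1.1, 1.0], [0.2, 0.4, 0.3]]
indices = [reboot_style_index(h) for h in arm_histories]
chosen = int(np.argmax(indices))
```

Because the perturbation scale comes from observed residuals rather than an assumed reward range, this style of index is natural for both bounded and unbounded rewards, as the abstract notes.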

Multi-Armed Bandits

Adaptive Approximate Policy Iteration

1 code implementation · 8 Feb 2020 · Botao Hao, Nevena Lazić, Yasin Abbasi-Yadkori, Pooria Joulani, Csaba Szepesvári

This is an improvement over the best existing bound of $\tilde{O}(T^{3/4})$ for the average-reward case with function approximation.

Online Learning

Adaptive Exploration in Linear Contextual Bandit

no code implementations · 15 Oct 2019 · Botao Hao, Tor Lattimore, Csaba Szepesvári

Contextual bandits serve as a fundamental model for many sequential decision making tasks.

Decision Making, Multi-Armed Bandits

Bootstrapping Upper Confidence Bound

no code implementations · NeurIPS 2019 · Botao Hao, Yasin Abbasi-Yadkori, Zheng Wen, Guang Cheng

The Upper Confidence Bound (UCB) method is arguably the most celebrated approach to online decision making with partial information feedback.
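For reference, the classical UCB1 index that bootstrapped variants build on adds an optimism bonus to each arm's empirical mean. A minimal sketch (the exploration constant `c` and the toy numbers are illustrative):

```python
import math

def ucb1_index(mean_reward, pulls, t, c=2.0):
    """Classical UCB1 index: empirical mean plus a confidence bonus
    that shrinks as the arm accumulates pulls."""
    if pulls == 0:
        return float("inf")  # force every arm to be tried once
    return mean_reward + math.sqrt(c * math.log(t) / pulls)

# At round t, play the arm maximizing the index; the less-pulled arm
# can win despite a lower empirical mean.
means = [0.6, 0.4]
pulls = [10, 5]
t = 15
chosen = max(range(len(means)), key=lambda i: ucb1_index(means[i], pulls[i], t))
```

The bootstrapped variant described here replaces the closed-form bonus with a data-driven confidence width, keeping the same optimism-in-the-face-of-uncertainty structure.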

Decision Making, Multi-Armed Bandits

Sparse Tensor Additive Regression

no code implementations · 31 Mar 2019 · Botao Hao, Boxiang Wang, Pengyuan Wang, Jingfei Zhang, Jian Yang, Will Wei Sun

Tensors are becoming prevalent in modern applications such as medical imaging and digital marketing.

Click-Through Rate Prediction

Sparse and Low-rank Tensor Estimation via Cubic Sketchings

no code implementations · 29 Jan 2018 · Botao Hao, Anru Zhang, Guang Cheng

In this paper, we propose a general framework for sparse and low-rank tensor estimation from cubic sketchings.

Tensor Decomposition

Simultaneous Clustering and Estimation of Heterogeneous Graphical Models

no code implementations · 28 Nov 2016 · Botao Hao, Will Wei Sun, Yufeng Liu, Guang Cheng

We consider joint estimation of multiple graphical models arising from heterogeneous and high-dimensional observations.

Sparse Learning
