Search Results for author: Thanh Nguyen-Tang

Found 12 papers, 6 papers with code

Offline Multitask Representation Learning for Reinforcement Learning

no code implementations18 Mar 2024 Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup

We study offline multitask representation learning in reinforcement learning (RL), where a learner is given offline datasets from several tasks that share a common representation and is asked to learn that shared representation.

Tasks: Reinforcement Learning (RL) +1

On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling, and Beyond

no code implementations6 Jan 2024 Thanh Nguyen-Tang, Raman Arora

This result is surprising given that prior work suggested an unfavorable sample complexity for the RO-based algorithm relative to the VS-based algorithm, and that posterior sampling is rarely considered in offline RL due to its explorative nature.

Tasks: Decision Making, Offline RL +1
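The posterior-sampling idea mentioned in the abstract can be illustrated with a Bayesian linear reward model fit to a fixed offline dataset. This is a minimal sketch of the general principle only, not the paper's algorithm or analysis; the function name and interface here are hypothetical:

```python
import numpy as np

def posterior_sample_action(X, y, actions, sigma2=1.0, lam=1.0, rng=None):
    """Posterior sampling for offline decision making with a Bayesian
    linear reward model (an illustrative sketch, not the paper's method).

    X: (n, d) offline features, y: (n,) observed rewards,
    actions: (k, d) candidate action features.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Gaussian/ridge posterior over the reward parameter: N(mean, sigma2 * A^{-1})
    A = lam * np.eye(d) + X.T @ X
    A_inv = np.linalg.inv(A)
    mean = A_inv @ X.T @ y
    # Draw one parameter sample from the posterior and act greedily under it
    theta = rng.multivariate_normal(mean, sigma2 * A_inv)
    return int(np.argmax(actions @ theta))
```

With abundant, low-noise data the posterior concentrates on the true parameter and the sampled action matches the greedy one; on poorly covered data the sampling injects the parameter uncertainty directly into action selection.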

SigFormer: Signature Transformers for Deep Hedging

1 code implementation20 Oct 2023 Anh Tong, Thanh Nguyen-Tang, Dongeun Lee, Toan Tran, Jaesik Choi

To mitigate such difficulties, we introduce SigFormer, a novel deep learning model that combines the power of path signatures and transformers to handle sequential data, particularly in cases with irregularities.
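The path-signature features SigFormer builds on can be sketched in a few lines of NumPy. The depth-2 signature of a piecewise-linear path collects the total displacement (level 1) and the pairwise iterated integrals of increments (level 2); this is a generic illustration of the feature, not the model's architecture, and the function name is hypothetical:

```python
import numpy as np

def signature_level2(path):
    """Depth-2 signature of a piecewise-linear path given as an (N, d)
    array of points (a sketch of the feature SigFormer consumes)."""
    dx = np.diff(path, axis=0)              # increments, shape (N-1, d)
    s1 = dx.sum(axis=0)                     # level 1: total displacement
    # Level 2: S^{i,j} = sum_k [(cumulative increment before step k)_i * dx_k^j
    #                            + 0.5 * dx_k^i * dx_k^j]
    cum = np.vstack([np.zeros(dx.shape[1]), np.cumsum(dx, axis=0)[:-1]])
    s2 = cum.T @ dx + 0.5 * dx.T @ dx
    return s1, s2
```

A useful sanity check is the level-2 shuffle identity S^{i,j} + S^{j,i} = S^i S^j, which this discretization satisfies exactly.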

VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation

1 code implementation24 Feb 2023 Thanh Nguyen-Tang, Raman Arora

We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets.

Tasks: Computational Efficiency, Offline RL +3

TIPI: Test Time Adaptation With Transformation Invariance

1 code implementation CVPR 2023 A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, Philip H.S. Torr

Test Time Adaptation offers a means to combat this problem, as it allows the model to adapt during test time to the new data distribution, using only unlabeled test data batches.

Tasks: Autonomous Driving, Test-time Adaptation
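TIPI's specific mechanism is invariance to input transformations, which is not reproduced here; as a minimal illustration of the general test-time adaptation setup the abstract describes (adapting from unlabeled test batches alone), the following sketches a TENT-style entropy-minimization step for a linear softmax classifier, with a manually derived gradient. All names are hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_minimization_step(W, X, lr=0.05):
    """One TENT-style adaptation step: descend the mean prediction entropy
    of a linear softmax classifier z = X @ W on an unlabeled test batch
    (a generic baseline sketch, not TIPI's transformation-invariance loss)."""
    P = softmax(X @ W)                       # (n, k) predicted probabilities
    logP = np.log(P + 1e-12)
    H = -(P * logP).sum(axis=1)              # per-sample prediction entropy
    # Gradient of entropy w.r.t. logits: dH/dz_j = -p_j * (log p_j + H)
    dz = -P * (logP + H[:, None])            # (n, k)
    dW = X.T @ dz / X.shape[0]               # chain rule through z = X @ W
    return W - lr * dW, H.mean()
```

Each step sharpens the model's predictions on the incoming test distribution without any labels; methods like TIPI replace the entropy objective with losses that are better behaved under distribution shift.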

On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation

no code implementations23 Nov 2022 Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora

To the best of our knowledge, these are the first $\tilde{\mathcal{O}}(\frac{1}{K})$ bound and absolute zero sub-optimality bound respectively for offline RL with linear function approximation from adaptive data with partial coverage.

Tasks: Offline RL, Reinforcement Learning +1

On Practical Reinforcement Learning: Provable Robustness, Scalability, and Statistical Efficiency

1 code implementation3 Mar 2022 Thanh Nguyen-Tang

This thesis rigorously studies fundamental reinforcement learning (RL) methods in modern practical considerations, including robust RL, distributional RL, and offline RL with neural function approximation.

Tasks: Offline RL, Reinforcement Learning +1

Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization

1 code implementation ICLR 2022 Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh

Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than an online counterpart.

Tasks: Multi-Armed Bandits
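The pessimism principle this paper builds on can be sketched with a linear model: estimate rewards by ridge regression on the offline data, then act on a lower confidence bound so that actions in poorly covered directions are penalized. The paper's algorithm works with neural networks rather than this linear sketch, and the names below are hypothetical:

```python
import numpy as np

def pessimistic_action(X, y, actions, lam=1.0, beta=1.0):
    """Pessimism via a lower confidence bound (LCB) with ridge regression
    (a linear sketch of the principle; the paper uses neural networks).

    X: (n, d) offline features, y: (n,) rewards, actions: (k, d) candidates.
    """
    d = X.shape[1]
    A = lam * np.eye(d) + X.T @ X
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ X.T @ y
    est = actions @ theta_hat                 # point estimates of reward
    # Confidence width a^T A^{-1} a is large where the offline data is sparse
    width = np.sqrt(np.einsum('ki,ij,kj->k', actions, A_inv, actions))
    return int(np.argmax(est - beta * width)), est, width
```

The net effect is that the learner only commits to actions the offline dataset actually supports, which is the key to avoiding the over-optimism of naive greedy policies in the offline setting.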

Sample Complexity of Offline Reinforcement Learning with Deep ReLU Networks

no code implementations11 Mar 2021 Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh

To the best of our knowledge, this is the first theoretical characterization of the sample complexity of offline RL with deep neural network function approximation under the general Besov regularity condition, which goes beyond the linearity regime of traditional reproducing kernel Hilbert spaces and neural tangent kernels.

Tasks: Offline RL, Reinforcement Learning +1
