Search Results for author: Siddharth Aravindan

Found 5 papers, 1 paper with code

ExPoSe: Combining State-Based Exploration with Gradient-Based Online Search

1 code implementation • 3 Feb 2022 • Dixant Mittal, Siddharth Aravindan, Wee Sun Lee

Depending on the smoothness of the action-value function, one approach to overcoming this issue is online learning, where information is interpolated among similar states; Policy Gradient Search provides a practical algorithm to achieve this.
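
The excerpt refers to Policy Gradient Search, in which a parameterized policy is updated by gradient steps during simulated rollouts, so information gathered in one state transfers to similar states. Below is a minimal, self-contained sketch of that general idea; the toy walk environment, linear feature map, and all hyperparameters are illustrative assumptions, not the authors' ExPoSe setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy simulator (an assumption for illustration): a 1-D walk where action 1
# moves right and action 0 moves left; a rollout earns reward 1 if it
# reaches position +5 within the horizon.
def rollout(theta, features, horizon=20):
    pos, steps, reward = 0, [], 0.0
    for _ in range(horizon):
        x = features(pos)
        a = rng.choice(2, p=softmax(theta @ x))
        steps.append((a, x))
        pos += 1 if a == 1 else -1
        if pos >= 5:
            reward = 1.0
            break
    return steps, reward

# The policy is a function of state *features*, so a gradient update made in
# one simulated state also shifts the policy in similar states.
features = lambda pos: np.array([1.0, pos / 5.0])
theta, lr = np.zeros((2, 2)), 0.5

for _ in range(200):                      # online search: repeated rollouts
    steps, R = rollout(theta, features)
    grad = np.zeros_like(theta)
    for a, x in steps:                    # REINFORCE-style gradient
        g = -np.outer(softmax(theta @ x), x)
        g[a] += x                         # d/dtheta log pi(a|x), linear softmax
        grad += g
    theta += lr * R * grad

print("P(move right) at the root state:", softmax(theta @ features(0))[1])
```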

Atari Games • Decision Making

EVaDE: Event-Based Variational Thompson Sampling for Model-Based Reinforcement Learning

no code implementations • 29 Sep 2021 • Siddharth Aravindan, Dixant Mittal, Wee Sun Lee

These layers rely on Gaussian dropouts and are inserted between the layers of the deep neural network model to facilitate variational Thompson sampling.
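
Gaussian dropout multiplies activations by noise with mean 1; keeping that noise active at decision time means each forward pass samples one network, which is what enables Thompson-style sampling. The PyTorch sketch below shows the bare mechanism only; the fixed noise rate and the layer placement are assumptions, and the paper's event-based layers are more structured than this.

```python
import torch
import torch.nn as nn

# Multiplicative Gaussian-dropout layer: activations are scaled by noise with
# mean 1, so their expectation is unchanged. Keeping the noise on at action-
# selection time makes each forward pass one sample from an approximate
# posterior over networks, which is what Thompson sampling needs.
class GaussianDropout(nn.Module):
    def __init__(self, p=0.1):
        super().__init__()
        self.alpha = p / (1 - p)  # variance of the multiplicative noise

    def forward(self, x):
        noise = 1 + self.alpha ** 0.5 * torch.randn_like(x)
        return x * noise

# Inserted between the layers of an otherwise ordinary network:
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), GaussianDropout(p=0.1),
    nn.Linear(64, 4),                  # e.g. Q-values for 4 actions
)
q_sample = model(torch.randn(1, 128))  # each call draws one model sample
action = q_sample.argmax(dim=1)        # act greedily w.r.t. that sample
```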

Atari Games • Model-based Reinforcement Learning • +3

State-Aware Variational Thompson Sampling for Deep Q-Networks

no code implementations • 7 Feb 2021 • Siddharth Aravindan, Wee Sun Lee

We derive a variational Thompson sampling approximation for DQNs which uses a deep network whose parameters are perturbed by a learned variational noise distribution.
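
A learned noise distribution over parameters can be realised as linear layers whose weight means and per-weight noise scales are both trainable. The sketch below shows that general mechanism in PyTorch; it does not reproduce the paper's variational posterior or training objective, and the initial noise scale is an assumption.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a linear layer whose weights are perturbed by *learned* Gaussian
# noise, the general mechanism the abstract describes.
class NoisyLinear(nn.Module):
    def __init__(self, n_in, n_out, sigma0=0.02):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_out, n_in) * n_in ** -0.5)
        self.log_sigma = nn.Parameter(torch.full((n_out, n_in), math.log(sigma0)))
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # Fresh weight noise on every forward pass; log_sigma is a Parameter,
        # so the per-weight noise scale is trained jointly with the weights.
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return F.linear(x, w, self.bias)

# A DQN head built from such layers yields a different Q-function sample per
# forward pass; acting greedily on one sample is Thompson-style exploration.
q_net = nn.Sequential(NoisyLinear(128, 64), nn.ReLU(), NoisyLinear(64, 4))
action = q_net(torch.randn(1, 128)).argmax(dim=1)
```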

Thompson Sampling

An Analysis of Frame-skipping in Reinforcement Learning

no code implementations • 7 Feb 2021 • Shivaram Kalyanakrishnan, Siddharth Aravindan, Vishwajeet Bagdawat, Varun Bhatt, Harshith Goka, Archit Gupta, Kalpesh Krishna, Vihari Piratla

In this paper, we investigate the role of the parameter $d$ in RL, under which the agent repeats each chosen action for $d$ consecutive time steps; $d$ is called the "frame-skip" parameter, since states in the Atari domain are images (frames).
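
A minimal wrapper-style sketch of what $d$ does, written against the Gymnasium API as an assumption (the paper itself analyses the role of $d$ rather than providing such a wrapper):

```python
import gymnasium as gym

# Frame-skip with parameter d: the agent picks an action once every d frames,
# the action is repeated for the skipped frames, and intermediate rewards are
# summed. Atari pipelines often also max-pool the last two frames; that
# detail is omitted here.
class FrameSkip(gym.Wrapper):
    def __init__(self, env, d=4):
        super().__init__(env)
        self.d = d

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.d):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break  # stop repeating once the episode ends
        return obs, total_reward, terminated, truncated, info

# Usage sketch: env = FrameSkip(gym.make("ALE/Breakout-v5"), d=4)
```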

Decision Making • reinforcement-learning • +1

Learning to Prune Deep Neural Networks via Reinforcement Learning

no code implementations • 9 Jul 2020 • Manas Gupta, Siddharth Aravindan, Aleksandra Kalisz, Vijay Chandrasekhar, Lin Jie

PuRL achieves more than 80% sparsity on the ResNet-50 model while retaining a Top-1 accuracy of 75.37% on the ImageNet dataset.
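
For context, "sparsity" here is the fraction of weights that are exactly zero. The short sketch below reaches a target sparsity by plain magnitude pruning; it only illustrates the metric, since PuRL instead learns its pruning actions via RL.

```python
import torch

# Prune to a target sparsity by zeroing the smallest-magnitude weights.
def prune_to_sparsity(w, target=0.8):
    k = int(target * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    mask = (w.abs() > threshold).float()
    return w * mask, mask

w = torch.randn(256, 256)
w_pruned, _ = prune_to_sparsity(w, target=0.8)
print("sparsity:", (w_pruned == 0).float().mean().item())  # ~0.80
```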

Model Compression • reinforcement-learning • +1
