Search Results for author: Riley Simmons-Edler

Found 6 papers, 0 papers with code

AuraSense: Robot Collision Avoidance by Full Surface Proximity Detection

no code implementations • 10 Aug 2021 • Xiaoran Fan, Riley Simmons-Edler, Daewon Lee, Larry Jackel, Richard Howard, Daniel Lee

In this paper, we introduce the Leaky Surface Wave (LSW) phenomenon as a novel sensing modality, and present AuraSense, a proximity detection system based on the LSW.

Collision Avoidance

Towards Practical Credit Assignment for Deep Reinforcement Learning

no code implementations • 8 Jun 2021 • Vyacheslav Alipov, Riley Simmons-Edler, Nikita Putintsev, Pavel Kalinin, Dmitry Vetrov

Credit assignment is a fundamental problem in reinforcement learning: measuring an action's influence on future rewards.

Atari Games • reinforcement-learning +1
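
As a point of reference for the abstract snippet above, here is one textbook formalization of credit (a general RL convention, not taken from this paper): the advantage of an action compares the return that followed it against the policy's average outcome in that state.

```latex
% Discounted return from step t, and the advantage of action a_t.
% A positive advantage means a_t did better than the policy's average choice in s_t.
\[
G_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k},
\qquad
A^{\pi}(s_t, a_t) = \mathbb{E}\left[ G_t \mid s_t, a_t \right] - \mathbb{E}\left[ G_t \mid s_t \right]
\]
```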

QXplore: Q-Learning Exploration by Maximizing Temporal Difference Error

no code implementations • 25 Sep 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee

We implement the objective with an adversarial Q-learning method in which Q and Qx are the action-value functions for extrinsic and secondary rewards, respectively.

Continuous Control • Q-Learning +2
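
A minimal sketch of the two-critic setup described in the snippet above, not the authors' code: Q is trained on the extrinsic reward, and Qx is trained on a secondary reward assumed here to be the magnitude of Q's TD error. Network sizes and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

obs_dim, n_actions, gamma = 4, 2, 0.99
Q = mlp(obs_dim, n_actions)    # critic for the extrinsic reward
Qx = mlp(obs_dim, n_actions)   # critic for the secondary (exploration) reward

def td_error(s, a, r, s_next):
    # One-step TD error of the extrinsic critic Q.
    with torch.no_grad():
        target = r + gamma * Q(s_next).max(dim=1).values
    return target - Q(s).gather(1, a.unsqueeze(1)).squeeze(1)

# A batch of placeholder transitions.
s, s_next = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
a, r = torch.randint(n_actions, (8,)), torch.randn(8)

delta = td_error(s, a, r, s_next)
r_x = delta.abs().detach()      # assumed secondary reward: |TD error| of Q
loss_q = (delta ** 2).mean()    # ordinary Q-learning loss on the extrinsic reward
# Qx would be updated with the same TD rule, substituting r_x for r.
```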

Reward Prediction Error as an Exploration Objective in Deep RL

no code implementations • 19 Jun 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee

We then propose a deep reinforcement learning method, QXplore, which exploits the temporal difference error of a Q-function to solve hard exploration tasks in high-dimensional MDPs.

Atari Games • Continuous Control +4
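
Written out, the TD-error signal the snippet refers to is the standard one-step quantity below; treating its magnitude as the secondary (exploration) reward is an assumption consistent with the sketch above, not a quotation from the paper.

```latex
% One-step TD error of the extrinsic Q-function; the secondary
% (exploration) reward is assumed here to be its magnitude.
\[
\delta_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t),
\qquad
r^{x}_t = \lvert \delta_t \rvert
\]
```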

Q-Learning for Continuous Actions with Cross-Entropy Guided Policies

no code implementations • 25 Mar 2019 • Riley Simmons-Edler, Ben Eisner, Eric Mitchell, Sebastian Seung, Daniel Lee

CGP aims to combine the stability and performance of iterative sampling policies with the low computational cost of a policy network.

Q-Learning • Reinforcement Learning (RL)
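
A hedged sketch of the combination described above, not the paper's implementation: the cross-entropy method (CEM) iteratively searches a learned Q-function for a good continuous action, and a cheap policy network is regressed onto the CEM output so inference avoids the sampling loop. Sizes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 3, 2
qnet = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                       nn.Linear(64, act_dim), nn.Tanh())

def cem_action(s, iters=3, pop=64, elite=8):
    # Iteratively refit a Gaussian over actions toward high Q-values.
    mu, std = torch.zeros(act_dim), torch.ones(act_dim)
    for _ in range(iters):
        actions = (mu + std * torch.randn(pop, act_dim)).clamp(-1, 1)
        q = qnet(torch.cat([s.expand(pop, -1), actions], dim=1)).squeeze(1)
        elites = actions[q.topk(elite).indices]
        mu, std = elites.mean(0), elites.std(0) + 1e-6
    return mu

s = torch.randn(obs_dim)
target_action = cem_action(s).detach()
# Distillation: regress the fast policy network onto the CEM action.
loss_pi = ((policy(s) - target_action) ** 2).mean()
```

At deployment, policy(s) would replace cem_action(s), trading a small loss in action quality for a single forward pass.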
