Search Results for author: Ben Eisner

Found 9 papers, 2 papers with code

Deep SE(3)-Equivariant Geometric Reasoning for Precise Placement Tasks

no code implementations • 20 Apr 2024 • Ben Eisner, Yi Yang, Todor Davchev, Mel Vecerik, Jonathan Scholz, David Held

In this work, we propose a method for precise relative pose prediction which is provably SE(3)-equivariant, can be learned from only a few demonstrations, and can generalize across variations in a class of objects.
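
As a concrete illustration of what SE(3)-equivariance means for a relative pose predictor (a sketch of the property only, not the paper's architecture): if f(P, Q) returns the rigid transform placing action points P relative to anchor points Q, then for any rigid transforms Ta and Tb it must hold that f(Ta·P, Tb·Q) = Tb · f(P, Q) · Ta⁻¹. The Kabsch solver below is a simple stand-in predictor that satisfies this exactly, so the check passes to numerical precision; a learned equivariant model would be substituted for `kabsch` in the same harness.

```python
# Numerical check of the SE(3)-equivariance property for relative pose
# prediction. Kabsch alignment is used as a stand-in predictor here; it
# satisfies the property exactly for corresponding point clouds.
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (4x4) mapping points P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cQ - R @ cP
    return T

def random_se3(rng):
    """Sample a random rigid transform as a 4x4 homogeneous matrix."""
    Qm, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Qm) < 0:
        Qm[:, 0] *= -1
    T = np.eye(4)
    T[:3, :3] = Qm
    T[:3, 3] = rng.normal(size=3)
    return T

def apply(T, P):
    return P @ T[:3, :3].T + T[:3, 3]

rng = np.random.default_rng(0)
P, Q = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
Ta, Tb = random_se3(rng), random_se3(rng)

# Equivariance: f(Ta P, Tb Q) == Tb f(P, Q) inv(Ta)
lhs = kabsch(apply(Ta, P), apply(Tb, Q))
rhs = Tb @ kabsch(P, Q) @ np.linalg.inv(Ta)
assert np.allclose(lhs, rhs), "equivariance check failed"
```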

On Time-Indexing as Inductive Bias in Deep RL for Sequential Manipulation Tasks

no code implementations • 3 Jan 2024 • M. Nomaan Qureshi, Ben Eisner, David Held

In this paper we explore a simple structure that is conducive to the skill learning required by many sequential manipulation tasks.

Inductive Bias
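
One common way to realize time-indexing as an inductive bias (a generic construction, not necessarily the paper's exact formulation) is to condition the policy on time by appending a normalized timestep to every observation:

```python
# Generic time-indexing sketch: append t / horizon to each observation so
# the policy can condition on task phase.
import numpy as np
import gymnasium as gym

class TimeIndexObservation(gym.ObservationWrapper):
    """Appends the normalized timestep to the observation vector."""
    def __init__(self, env, horizon):
        super().__init__(env)
        self.horizon = horizon
        self.t = 0
        low = np.append(env.observation_space.low, 0.0)
        high = np.append(env.observation_space.high, 1.0)
        self.observation_space = gym.spaces.Box(low=low, high=high,
                                                dtype=np.float64)

    def reset(self, **kwargs):
        self.t = 0
        return super().reset(**kwargs)

    def step(self, action):
        self.t += 1
        return super().step(action)

    def observation(self, obs):
        return np.append(obs, self.t / self.horizon)

env = TimeIndexObservation(gym.make("Pendulum-v1"), horizon=200)
obs, _ = env.reset()
print(obs.shape)  # original observation dimension + 1
```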

FlowBot3D: Learning 3D Articulation Flow to Manipulate Articulated Objects

no code implementations • 9 May 2022 • Ben Eisner, Harry Zhang, David Held

We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects to guide downstream motion planning of the system to articulate the objects.

Motion Planning
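
A minimal sketch of how such per-point flow predictions could drive manipulation, with the flow network itself assumed: take the point whose predicted motion is largest as the contact point and move along its normalized flow direction.

```python
# Consuming per-point "articulation flow" predictions (the flow network is
# assumed): grasp the most mobile point, move along its flow direction.
import numpy as np

def select_contact_and_direction(points, flow):
    """points: (N, 3) part point cloud; flow: (N, 3) predicted 3D flow."""
    mag = np.linalg.norm(flow, axis=1)
    i = int(np.argmax(mag))                 # point with largest predicted motion
    direction = flow[i] / (mag[i] + 1e-9)   # unit motion direction
    return points[i], direction

# Hypothetical predictions standing in for network output:
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(1024, 3))
flow = rng.normal(scale=0.01, size=(1024, 3))
contact, direction = select_contact_and_direction(pts, flow)
print(contact, direction)
```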

Self-supervised Transparent Liquid Segmentation for Robotic Pouring

1 code implementation • 3 Mar 2022 • Gautham Narayan Narasimhan, Kai Zhang, Ben Eisner, Xingyu Lin, David Held

Liquid state estimation is important for robotics tasks such as pouring; however, estimating the state of transparent liquids is a challenging problem.

Segmentation

Robotic Grasping through Combined Image-Based Grasp Proposal and 3D Reconstruction

no code implementations • 3 Mar 2020 • Tarik Tosun, Daniel Yang, Ben Eisner, Volkan Isler, Daniel Lee

We present a novel approach to robotic grasp planning using both a learned grasp proposal network and a learned 3D shape reconstruction network.

Robotics
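
The snippet does not specify how the two networks interact; one plausible, purely hypothetical composition is to filter the image-based grasp proposals against the reconstructed geometry (here an occupancy grid), rejecting grasps that would collide with it:

```python
# Hypothetical composition of the two learned components: reject grasp
# proposals that lie too close to reconstructed occupied voxels. Both
# `proposals` and `occupancy` stand in for network outputs.
import numpy as np

def collision_free(grasp_center, occupancy, origin, voxel_size, radius=0.04):
    """True if no occupied voxel center lies within `radius` of the grasp."""
    occ_idx = np.argwhere(occupancy)                   # (K, 3) voxel indices
    occ_xyz = origin + (occ_idx + 0.5) * voxel_size    # voxel centers, meters
    return bool(np.all(np.linalg.norm(occ_xyz - grasp_center, axis=1) > radius))

# Toy data standing in for the reconstruction and proposal networks:
occupancy = np.zeros((32, 32, 32), dtype=bool)
occupancy[14:18, 14:18, 14:18] = True                  # reconstructed object
origin, voxel_size = np.array([-0.16, -0.16, -0.16]), 0.01
proposals = [np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.1])]

kept = [g for g in proposals if collision_free(g, occupancy, origin, voxel_size)]
print(len(kept), "of", len(proposals), "proposals survive the collision check")
```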

QXplore: Q-Learning Exploration by Maximizing Temporal Difference Error

no code implementations • 25 Sep 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee

We implement the objective with an adversarial Q-learning method in which Q and Qx are the action-value functions for extrinsic and secondary rewards, respectively.

Continuous Control • Q-Learning • +2
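
The construction described in the abstract can be sketched schematically (standard tabular Q-learning notation; the network architectures and adversarial sampling scheme are omitted): the extrinsic Q-function's temporal-difference error becomes the reward that the exploration value function Qx is trained to maximize.

```python
# Schematic QXplore-style sketch: the TD-error of the extrinsic Q-function
# serves as the intrinsic reward for the exploration value function Qx.
import numpy as np

GAMMA = 0.99

def td_error(Q, s, a, r, s_next, done):
    """One-step temporal-difference error of the extrinsic Q-function."""
    target = r + (0.0 if done else GAMMA * np.max(Q[s_next]))
    return target - Q[s, a]

def exploration_reward(Q, s, a, r, s_next, done):
    """Intrinsic reward: magnitude of the extrinsic TD-error."""
    return abs(td_error(Q, s, a, r, s_next, done))

# Tabular toy example: Qx is updated toward the TD-error of Q.
n_states, n_actions, alpha = 5, 2, 0.1
Q = np.zeros((n_states, n_actions))   # extrinsic action values
Qx = np.zeros((n_states, n_actions))  # exploration action values

s, a, r, s_next, done = 0, 1, 1.0, 2, False
rx = exploration_reward(Q, s, a, r, s_next, done)
Qx[s, a] += alpha * (rx + GAMMA * np.max(Qx[s_next]) - Qx[s, a])
Q[s, a] += alpha * td_error(Q, s, a, r, s_next, done)
print(rx, Qx[s, a], Q[s, a])
```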

Reward Prediction Error as an Exploration Objective in Deep RL

no code implementations • 19 Jun 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee

We then propose a deep reinforcement learning method, QXplore, which exploits the temporal difference error of a Q-function to solve hard exploration tasks in high-dimensional MDPs.

Atari Games • Continuous Control • +4

Q-Learning for Continuous Actions with Cross-Entropy Guided Policies

no code implementations • 25 Mar 2019 • Riley Simmons-Edler, Ben Eisner, Eric Mitchell, Sebastian Seung, Daniel Lee

CGP aims to combine the stability and performance of iterative sampling policies with the low computational cost of a policy network.

Q-Learning • Reinforcement Learning (RL)
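
The "iterative sampling policy" side of that trade-off can be sketched with the cross-entropy method searching the action space against a critic; the quadratic `q_fn` below is a toy stand-in for a learned Q-network, and CGP's cheap policy network would be trained to imitate this sampler's output.

```python
# Cross-entropy method (CEM) action selection against a Q-function: the
# expensive iterative sampler that a policy network can later distill.
import numpy as np

def cem_policy(q_fn, s, act_dim, iters=4, pop=64, elites=8, seed=0):
    """Iteratively refit a Gaussian over actions toward the top-Q elites."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(act_dim), np.ones(act_dim)
    for _ in range(iters):
        acts = np.clip(rng.normal(mu, sigma, size=(pop, act_dim)), -1, 1)
        scores = np.array([q_fn(s, a) for a in acts])
        elite = acts[np.argsort(scores)[-elites:]]
        mu, sigma = elite.mean(0), elite.std(0) + 1e-3
    return mu

# Toy critic: prefers actions near a fixed target (stand-in for a Q-network).
target = np.array([0.3, -0.5])
q_fn = lambda s, a: -np.sum((a - target) ** 2)
print(cem_policy(q_fn, s=None, act_dim=2))  # converges near `target`
```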

emoji2vec: Learning Emoji Representations from their Description

7 code implementations • WS 2016 • Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bošnjak, Sebastian Riedel

Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings.

Representation Learning • Sentiment Analysis • +1
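
The core emoji2vec construction can be sketched with a toy embedding table standing in for pre-trained word2vec vectors: each emoji vector is trained in the word-embedding space so that a logistic score of its dot product with the summed word vectors of its description is high, and low for sampled negative descriptions.

```python
# Toy emoji2vec-style training: the tiny `word_vecs` table stands in for
# real pre-trained word2vec embeddings, which stay frozen during training.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
word_vecs = {w: rng.normal(size=dim) for w in
             ["face", "with", "tears", "of", "joy", "red", "heart"]}

def describe(words):
    """Description embedding: sum of (frozen) word vectors."""
    return np.sum([word_vecs[w] for w in words], axis=0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train one emoji vector with a logistic loss on (positive, negative) pairs.
v_emoji = rng.normal(scale=0.1, size=dim)
pos = describe(["face", "with", "tears", "of", "joy"])  # its own description
neg = describe(["red", "heart"])                        # sampled negative
lr = 0.1
for _ in range(200):
    # gradient ascent on log sigmoid(v.pos) + log sigmoid(-v.neg)
    v_emoji += lr * ((1 - sigmoid(v_emoji @ pos)) * pos
                     - sigmoid(v_emoji @ neg) * neg)

print(sigmoid(v_emoji @ pos), sigmoid(v_emoji @ neg))  # high, low
```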
