no code implementations • 20 Apr 2024 • Ben Eisner, Yi Yang, Todor Davchev, Mel Vecerik, Jonathan Scholz, David Held
In this work, we propose a method for precise relative pose prediction which is provably SE(3)-equivariant, can be learned from only a few demonstrations, and can generalize across variations in a class of objects.
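The SE(3)-equivariance property claimed above can be illustrated with a toy example. The sketch below is NOT the paper's method; it uses a deliberately simple predictor (difference of point-cloud centroids, which handles only the translation part of a relative pose) purely to show what equivariance means: rotating both input clouds rotates the prediction by the same rotation. All data and names here are illustrative assumptions.

```python
import numpy as np

def predict_translation(cloud_a, cloud_b):
    """Toy predictor: relative translation as the difference of centroids.
    Illustrative stand-in, not the paper's learned model."""
    return cloud_b.mean(axis=0) - cloud_a.mean(axis=0)

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))                          # toy point cloud A
B = rng.normal(size=(10, 3)) + np.array([1.0, 2.0, 3.0])  # toy point cloud B

# A random proper rotation via QR decomposition.
Rm, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Rm) < 0:
    Rm[:, 0] *= -1  # flip one axis so det(Rm) = +1

t = predict_translation(A, B)
t_rot = predict_translation(A @ Rm.T, B @ Rm.T)

# Equivariance check: rotating both inputs rotates the predicted translation.
assert np.allclose(Rm @ t, t_rot)
```

A learned equivariant network enforces this same commutation property by construction rather than by the linearity of the centroid.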
no code implementations • 3 Jan 2024 • M. Nomaan Qureshi, Ben Eisner, David Held
In this paper, we explore a simple structure that is conducive to the skill learning required for many manipulation tasks.
no code implementations • 17 Nov 2022 • Chuer Pan, Brian Okorn, Harry Zhang, Ben Eisner, David Held
We conjecture that this relationship is a generalizable notion of a manipulation task that can transfer to new objects in the same category; examples include the relationship between the pose of a pan relative to an oven or the pose of a mug relative to a mug rack.
no code implementations • 9 May 2022 • Ben Eisner, Harry Zhang, David Held
We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects, guiding downstream motion planning to articulate those objects.
1 code implementation • 3 Mar 2022 • Gautham Narayan Narasimhan, Kai Zhang, Ben Eisner, Xingyu Lin, David Held
Liquid state estimation is important for robotics tasks such as pouring; however, estimating the state of transparent liquids is a challenging problem.
no code implementations • 3 Mar 2020 • Tarik Tosun, Daniel Yang, Ben Eisner, Volkan Isler, Daniel Lee
We present a novel approach to robotic grasp planning using both a learned grasp proposal network and a learned 3D shape reconstruction network.
no code implementations • 25 Sep 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee
We implement the objective with an adversarial Q-learning method in which Q and Qx are the action-value functions for extrinsic and secondary rewards, respectively.
no code implementations • 19 Jun 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee
We then propose a deep reinforcement learning method, QXplore, which exploits the temporal difference error of a Q-function to solve hard exploration tasks in high-dimensional MDPs.
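The core idea described in the two QXplore entries above can be sketched in a few lines: the TD error of the extrinsic Q-function serves as the secondary reward that a second value function, Qx, learns to seek out. The tiny two-state/two-action setup, learning rates, and single hand-picked transition below are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
import numpy as np

alpha, gamma = 0.5, 0.9
Q = np.zeros((2, 2))   # action-values for the extrinsic reward
Qx = np.zeros((2, 2))  # action-values for the secondary (exploration) reward

# One observed transition: state 0, action 1, extrinsic reward 1.0, next state 1.
s, a, r, s2 = 0, 1, 1.0, 1

# Standard Q-learning update on the extrinsic reward.
td = r + gamma * Q[s2].max() - Q[s, a]  # TD error = 1.0 (Q starts at zero)
Q[s, a] += alpha * td                   # Q[0, 1] becomes 0.5

# The magnitude of that TD error becomes the reward signal for Qx,
# so Qx-greedy behavior is drawn toward poorly-understood transitions.
rx = abs(td)
Qx[s, a] += alpha * (rx + gamma * Qx[s2].max() - Qx[s, a])  # Qx[0, 1] becomes 0.5
```

In the full method both functions would be deep networks trained on batches of experience; this sketch only shows how the two updates are coupled.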
no code implementations • 25 Mar 2019 • Riley Simmons-Edler, Ben Eisner, Eric Mitchell, Sebastian Seung, Daniel Lee
CGP aims to combine the stability and performance of iterative sampling policies with the low computational cost of a policy network.
7 code implementations • WS 2016 • Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bošnjak, Sebastian Riedel
Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings.
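As a minimal illustration of how pre-trained word embeddings are typically consumed by such applications: look up fixed vectors for tokens, then compare them by cosine similarity. The three-dimensional vectors below are toy values invented for this sketch, not real pre-trained embeddings.

```python
import numpy as np

# Toy "pre-trained" embedding table (invented illustrative vectors).
embeddings = {
    "happy":  np.array([0.9, 0.1, 0.0]),
    "joyful": np.array([0.8, 0.2, 0.1]),
    "sad":    np.array([-0.7, 0.1, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words should score higher than unrelated ones.
sim_pos = cosine(embeddings["happy"], embeddings["joyful"])
sim_neg = cosine(embeddings["happy"], embeddings["sad"])
assert sim_pos > sim_neg
```

Real systems load large pre-trained tables (e.g. via gensim's KeyedVectors) and feed the looked-up vectors into downstream models; the lookup-and-compare pattern is the same.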