Search Results for author: Chris Paxton

Found 37 papers, 12 papers with code

Evaluating Continual Learning on a Home Robot

no code implementations • 4 Jun 2023 • Sam Powers, Abhinav Gupta, Chris Paxton

Robots in home environments need to be able to learn new skills continuously as data becomes available, becoming ever more capable over time while using as little real-world data as possible.

Continual Learning

HACMan: Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation

no code implementations • 6 May 2023 • Wenxuan Zhou, Bowen Jiang, Fan Yang, Chris Paxton, David Held

In this work, we introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects using point cloud observations.
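The snippet above describes a hybrid action space over point clouds: a discrete choice of contact point paired with continuous motion parameters. A toy numpy sketch of that action-selection pattern (not the authors' implementation — the per-point scorer and action head here are random stand-ins for learned networks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for learned networks: a per-point "critic" that scores each
# candidate contact point, and a per-point head that regresses continuous
# motion parameters for the chosen point.
Wq = rng.normal(size=(3,))          # per-point score (stand-in critic)
Wa = rng.normal(size=(3, 6))        # per-point 6-d motion head (stand-in actor)

def select_action(point_cloud):
    """Hybrid action: (discrete contact point index, continuous motion params)."""
    scores = point_cloud @ Wq              # (N,) score per candidate contact point
    idx = int(np.argmax(scores))           # discrete component: which point to act on
    motion = point_cloud[idx] @ Wa         # continuous component: motion parameters
    return idx, motion

cloud = rng.uniform(-0.5, 0.5, size=(256, 3))   # toy point cloud observation
idx, motion = select_action(cloud)
print(idx, motion.shape)
```

The key design choice mirrored here is that both the discrete and continuous parts of the action are predicted per point of the observed cloud, rather than in a fixed global frame.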

USA-Net: Unified Semantic and Affordance Representations for Robot Memory

no code implementations • 24 Apr 2023 • Benjamin Bolte, Austin Wang, Jimmy Yang, Mustafa Mukadam, Mrinal Kalakrishnan, Chris Paxton

In order for robots to follow open-ended instructions like "go open the brown cabinet over the sink", they require an understanding of both the scene geometry and the semantics of their environment.


StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects

no code implementations • 8 Nov 2022 • Weiyu Liu, Yilun Du, Tucker Hermans, Sonia Chernova, Chris Paxton

StructDiffusion even improves the success rate of assembling physically-valid structures out of unseen objects by an average of 16% over an existing multi-modal transformer model trained on specific structures.


CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

2 code implementations • 11 Oct 2022 • Nur Muhammad Mahi Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, Arthur Szlam

We propose CLIP-Fields, an implicit scene model that can be used for a variety of tasks, such as segmentation, instance identification, semantic search over space, and view localization.

Segmentation · Semantic Segmentation
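The CLIP-Fields entry above describes an implicit scene model queried for tasks like semantic search over space. A toy numpy sketch of that query pattern (not the authors' implementation — the tiny random MLP and random query vector are stand-ins; the real model is trained so the field's output matches CLIP and detector features distilled from RGB-D views):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "implicit field": a tiny random MLP mapping an (x, y, z) point to a
# 512-d embedding. Weights are random stand-ins for a trained field.
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 512)), np.zeros(512)

def field(points):
    """points: (N, 3) array of scene coordinates -> (N, 512) embeddings."""
    h = np.tanh(points @ W1 + b1)
    return h @ W2 + b2

def semantic_search(points, query_embedding):
    """Rank scene points by cosine similarity to a text query embedding."""
    emb = field(points)
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    return np.argsort(-(emb @ q))          # point indices, most relevant first

points = rng.uniform(-1, 1, size=(100, 3))   # sampled scene coordinates
query = rng.normal(size=512)                 # stand-in for a CLIP text embedding
print(semantic_search(points, query)[:5])    # top-5 matching points
```

Because the field maps coordinates (not pixels) to embeddings, the same trained model can serve segmentation, instance identification, and spatial search simply by changing the query.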

Transporters with Visual Foresight for Solving Unseen Rearrangement Tasks

no code implementations • 22 Feb 2022 • Hongtao Wu, Jikai Ye, Xin Meng, Chris Paxton, Gregory Chirikjian

We propose a visual foresight model for pick-and-place rearrangement manipulation which is able to learn efficiently.

Imitation Learning · Multi-Task Learning

Pre-Trained Language Models for Interactive Decision-Making

1 code implementation • 3 Feb 2022 • Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, Yuke Zhu

Together, these results suggest that language modeling induces representations that are useful for modeling not just language, but also goals and plans; these representations can aid learning and generalization even outside of language processing.

Imitation Learning · Language Modelling

IFOR: Iterative Flow Minimization for Robotic Object Rearrangement

no code implementations • CVPR 2022 • Ankit Goyal, Arsalan Mousavian, Chris Paxton, Yu-Wei Chao, Brian Okorn, Jia Deng, Dieter Fox

Accurate object rearrangement from vision is a crucial problem for a wide variety of real-world robotics applications in unstructured environments.

Optical Flow Estimation

Learning Perceptual Concepts by Bootstrapping from Human Queries

no code implementations • 9 Nov 2021 • Andreea Bobu, Chris Paxton, Wei Yang, Balakumar Sundaralingam, Yu-Wei Chao, Maya Cakmak, Dieter Fox

Second, we treat this low-dimensional concept as an automatic labeler to synthesize a large-scale high-dimensional data set with the simulator.

Motion Planning

StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects

no code implementations • 19 Oct 2021 • Weiyu Liu, Chris Paxton, Tucker Hermans, Dieter Fox

Geometric organization of objects into semantically meaningful arrangements pervades the built world.

SORNet: Spatial Object-Centric Representations for Sequential Manipulation

1 code implementation • 8 Sep 2021 • Wentao Yuan, Chris Paxton, Karthik Desingh, Dieter Fox

Sequential manipulation tasks require a robot to perceive the state of an environment and plan a sequence of actions leading to a desired goal state.

Relation Classification · Representation Learning

Predicting Stable Configurations for Semantic Placement of Novel Objects

1 code implementation • 26 Aug 2021 • Chris Paxton, Chris Xie, Tucker Hermans, Dieter Fox

We further demonstrate the ability of our planner to generate and execute diverse manipulation plans through a set of real-world experiments with a variety of objects.

Motion Planning

Language Grounding with 3D Objects

2 code implementations • 26 Jul 2021 • Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer

We introduce several CLIP-based models for distinguishing objects and demonstrate that while recent advances in jointly modeling vision and language are useful for robotic language understanding, it is still the case that these image-based models are weaker at understanding the 3D nature of objects -- properties which play a key role in manipulation.

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution

1 code implementation • 12 Jul 2021 • Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, Yoav Artzi

Natural language provides an accessible and expressive interface to specify long-term tasks for robotic agents.

NeRP: Neural Rearrangement Planning for Unknown Objects

no code implementations • 2 Jun 2021 • Ahmed H. Qureshi, Arsalan Mousavian, Chris Paxton, Michael C. Yip, Dieter Fox

We propose NeRP (Neural Rearrangement Planning), a deep learning based approach for multi-step neural object rearrangement planning that works with never-before-seen objects, is trained on simulation data, and generalizes to the real world.

Reactive Long Horizon Task Execution via Visual Skill and Precondition Models

no code implementations • 17 Nov 2020 • Shohin Mukherjee, Chris Paxton, Arsalan Mousavian, Adam Fishman, Maxim Likhachev, Dieter Fox

Zero-shot execution of unseen robotic tasks is important for allowing robots to perform a wide variety of tasks in human environments, but collecting the amounts of data necessary to train end-to-end policies in the real world is often infeasible.

Reactive Human-to-Robot Handovers of Arbitrary Objects

no code implementations • 17 Nov 2020 • Wei Yang, Chris Paxton, Arsalan Mousavian, Yu-Wei Chao, Maya Cakmak, Dieter Fox

We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects, a user study with naive users (N=6) handing over a subset of 15 objects, and a systematic evaluation examining different ways of handing objects.

Grasp Generation · Motion Planning

Human Grasp Classification for Reactive Human-to-Robot Handovers

no code implementations • 12 Mar 2020 • Wei Yang, Chris Paxton, Maya Cakmak, Dieter Fox

In this paper, we propose an approach for human-to-robot handovers in which the robot meets the human halfway: it classifies the human's grasp of the object and quickly plans a trajectory to take the object from the human's hand according to their intent.

Classification · General Classification
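The abstract above describes a two-step pattern: classify the human's grasp, then dispatch to a matching robot approach. A hypothetical sketch of that dispatch (the grasp labels, threshold, and approach directions are all invented for illustration; the real system classifies grasps from vision with a learned model):

```python
# Map a (hypothetical) grasp class to a robot approach direction, mirroring
# the classify-then-plan structure described in the paper.
APPROACH = {
    "pinch_top": (0.0, 0.0, -1.0),   # object pinched from above -> approach from above
    "open_palm": (0.0, 1.0,  0.0),   # object presented on open palm -> approach from front
}

def classify_grasp(hand_keypoints):
    """Stub classifier on 1-d 'keypoints'; a stand-in for a learned hand-pose model."""
    spread = max(hand_keypoints) - min(hand_keypoints)
    return "open_palm" if spread > 0.5 else "pinch_top"

def handover_approach(hand_keypoints):
    grasp = classify_grasp(hand_keypoints)
    return grasp, APPROACH[grasp]

print(handover_approach([0.1, 0.2, 0.9]))   # -> ('open_palm', (0.0, 1.0, 0.0))
```

The point of the structure is that the planned trajectory depends on the classified grasp, so the robot can adapt its approach to the human's intent rather than using a fixed handover pose.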

Transferable Task Execution from Pixels through Deep Planning Domain Learning

no code implementations • 8 Mar 2020 • Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya OGATA, Dieter Fox

On the other hand, symbolic planning methods such as STRIPS have long been able to solve new problems given only a domain definition and a symbolic goal, but these approaches often struggle on real-world robotic tasks due to the challenges of grounding these symbols from sensor data in a partially-observable world.
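To make the STRIPS reference above concrete, here is a minimal STRIPS-style forward search: actions are (preconditions, add effects, delete effects) over a set of symbols, and a plan is found from a domain definition and a symbolic goal alone. The toy domain ("pick"/"place") is invented for illustration:

```python
from collections import deque

# Toy STRIPS domain: action name -> (preconditions, add effects, delete effects).
actions = {
    "pick":  ({"hand_empty", "on_table"}, {"holding"}, {"hand_empty", "on_table"}),
    "place": ({"holding"}, {"on_shelf", "hand_empty"}, {"holding"}),
}

def plan(state, goal):
    """Breadth-first search from an initial symbolic state to any state containing goal."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        s, path = frontier.popleft()
        if goal <= s:                       # goal symbols all hold
            return path
        for name, (pre, add, dele) in actions.items():
            if pre <= s:                    # preconditions satisfied
                ns = frozenset((s - dele) | add)
                if ns not in seen:
                    seen.add(ns)
                    frontier.append((ns, path + [name]))
    return None

print(plan({"hand_empty", "on_table"}, {"on_shelf"}))   # -> ['pick', 'place']
```

This is exactly the kind of planner that generalizes to any goal expressible in the domain's symbols; the grounding difficulty the abstract points to is producing those symbols reliably from raw sensor data.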

Motion Reasoning for Goal-Based Imitation Learning

no code implementations • 13 Nov 2019 • De-An Huang, Yu-Wei Chao, Chris Paxton, Xinke Deng, Li Fei-Fei, Juan Carlos Niebles, Animesh Garg, Dieter Fox

We further show that by using the automatically inferred goal from the video demonstration, our robot is able to reproduce the same task in a real kitchen environment.

Imitation Learning · Motion Planning +1

Online Replanning in Belief Space for Partially Observable Task and Motion Problems

1 code implementation • 11 Nov 2019 • Caelan Reed Garrett, Chris Paxton, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Dieter Fox

To solve multi-step manipulation tasks in the real world, an autonomous robot must take actions to observe its environment and react to unexpected observations.

Continuous Control

Conditional Driving from Natural Language Instructions

no code implementations • 16 Oct 2019 • Junha Roh, Chris Paxton, Andrzej Pronobis, Ali Farhadi, Dieter Fox

Widespread adoption of self-driving cars will depend not only on their safety but largely on their ability to interact with human users.

Imitation Learning · Self-Driving Cars

"Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer

1 code implementation • 25 Sep 2019 • Andrew Hundt, Benjamin Killeen, Nicholas Greene, Hongtao Wu, Heeyeon Kwon, Chris Paxton, Gregory D. Hager

We are able to create real stacks in 100% of trials with 61% efficiency and real rows in 100% of trials with 59% efficiency by directly loading the simulation-trained model on the real robot with no additional real-world fine-tuning.

reinforcement-learning · Reinforcement Learning (RL)

Prospection: Interpretable Plans From Language By Predicting the Future

no code implementations • 20 Mar 2019 • Chris Paxton, Yonatan Bisk, Jesse Thomason, Arunkumar Byravan, Dieter Fox

High-level human instructions often correspond to behaviors with multiple implicit steps.

The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints

3 code implementations • 27 Oct 2018 • Andrew Hundt, Varun Jain, Chia-Hung Lin, Chris Paxton, Gregory D. Hager

We show that a mild relaxation of the task and workspace constraints implicit in existing object grasping datasets can cause neural network based grasping algorithms to fail on even a simple block stacking task when executed under more realistic circumstances.

6D Pose Estimation using RGBD · Industrial Robots +5

Visual Robot Task Planning

1 code implementation • 30 Mar 2018 • Chris Paxton, Yotam Barnoy, Kapil Katyal, Raman Arora, Gregory D. Hager

In this work, we propose a neural network architecture and associated planning algorithm that (1) learns a representation of the world useful for generating prospective futures after the application of high-level actions, (2) uses this generative model to simulate the result of sequences of high-level actions in a variety of environments, and (3) uses this same representation to evaluate these actions and perform tree search to find a sequence of high-level actions in a new environment.

Imitation Learning · Robot Task Planning
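The Visual Robot Task Planning abstract describes (1) a learned representation, (2) a generative model simulating high-level actions, and (3) tree search over those simulated futures. A toy numpy sketch of step (3) over stand-in models (the random transition matrices and value vector are invented placeholders, not the paper's learned networks):

```python
import numpy as np

rng = np.random.default_rng(2)

N_ACTIONS, LATENT = 4, 8
# Stand-ins for learned models: one "generative" latent transition per
# high-level action, and a value head scoring a predicted future.
T = rng.normal(size=(N_ACTIONS, LATENT, LATENT)) * 0.1
v = rng.normal(size=(LATENT,))

def simulate(state, action):
    """Predicted latent state after applying a high-level action."""
    return np.tanh(T[action] @ state)

def plan(state, depth):
    """Exhaustive depth-limited tree search over high-level action sequences."""
    if depth == 0:
        return [], float(v @ state)
    best_seq, best_val = None, -np.inf
    for a in range(N_ACTIONS):
        seq, val = plan(simulate(state, a), depth - 1)
        if val > best_val:
            best_seq, best_val = [a] + seq, val
    return best_seq, best_val

seq, val = plan(rng.normal(size=LATENT), depth=3)
print(seq)   # best high-level action sequence of length 3
```

The point is that the same learned representation is used both to roll actions forward (the generative model) and to evaluate the resulting futures, so search happens entirely in the model rather than on the robot.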

Occupancy Map Prediction Using Generative and Fully Convolutional Networks for Vehicle Navigation

no code implementations • 6 Mar 2018 • Kapil Katyal, Katie Popek, Chris Paxton, Joseph Moore, Kevin Wolfe, Philippe Burlina, Gregory D. Hager

In these situations, the robot's ability to reason about its future motion is often severely limited by sensor field of view (FOV).

Navigate · SSIM

Learning to Imagine Manipulation Goals for Robot Task Planning

no code implementations • 8 Nov 2017 • Chris Paxton, Kapil Katyal, Christian Rupprecht, Raman Arora, Gregory D. Hager

Ideally, we would combine the ability of machine learning to leverage big data for learning the semantics of a task with techniques from task planning that reliably generalize to new environments.

Robot Task Planning

Temporal and Physical Reasoning for Perception-Based Robotic Manipulation

2 code implementations • 11 Oct 2017 • Felix Jonathan, Chris Paxton, Gregory D. Hager

Accurate knowledge of object poses is crucial to successful robotic manipulation tasks, and yet most current approaches only work in laboratory settings.


Combining Neural Networks and Tree Search for Task and Motion Planning in Challenging Environments

no code implementations • 22 Mar 2017 • Chris Paxton, Vasumathi Raman, Gregory D. Hager, Marin Kobilarov

This paper investigates the ability of neural networks to learn both LTL constraints and control policies in order to generate task plans in complex environments.

