Search Results for author: Jonathan Tremblay

Found 30 papers, 16 papers with code

Diff-DOPE: Differentiable Deep Object Pose Estimation

no code implementations • 30 Sep 2023 • Jonathan Tremblay, Bowen Wen, Valts Blukis, Balakumar Sundaralingam, Stephen Tyree, Stan Birchfield

We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a 3D textured model of an object, and an initial pose of the object.

Object Pose Estimation +1
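The name indicates the mechanism: the pose is refined by gradient descent, with an image-space loss back-propagated through a differentiable renderer into the 6-DoF pose parameters. A minimal sketch of that loop, assuming a hypothetical differentiable renderer render_rgb (e.g., something built on nvdiffrast or PyTorch3D) rather than the paper's actual implementation:

    import torch

    def refine_pose(render_rgb, target_image, t_init, r_init, steps=100, lr=1e-2):
        """Gradient-descend a 6-DoF pose until the render matches the observation."""
        t = t_init.clone().requires_grad_(True)   # translation (3,)
        r = r_init.clone().requires_grad_(True)   # rotation, axis-angle (3,)
        opt = torch.optim.Adam([t, r], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            pred = render_rgb(rotation=r, translation=t)  # differentiable render
            loss = torch.nn.functional.l1_loss(pred, target_image)
            loss.backward()   # gradients flow through the renderer into the pose
            opt.step()
        return t.detach(), r.detach()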

Partial-View Object View Synthesis via Filtered Inversion

no code implementations • 3 Apr 2023 • Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber

At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.

Object
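The inversion step is a standard latent-optimization loop run from several random seeds, after which only the best-fitting codes are kept (the "filtered" part of the title). A rough sketch under those assumptions, with generator and loss_fn as placeholders rather than the paper's models:

    import torch

    def invert_from_seeds(generator, views, loss_fn, n_seeds=8, steps=200, keep=3):
        results = []
        for seed in range(n_seeds):
            torch.manual_seed(seed)
            z = torch.randn(1, generator.latent_dim, requires_grad=True)
            opt = torch.optim.Adam([z], lr=5e-3)
            for _ in range(steps):
                opt.zero_grad()
                loss = sum(loss_fn(generator(z, pose), img) for pose, img in views)
                loss.backward()
                opt.step()
            results.append((loss.item(), z.detach()))
        results.sort(key=lambda r: r[0])          # filter: keep the best fits
        return [z for _, z in results[:keep]]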

TTA-COPE: Test-Time Adaptation for Category-Level Object Pose Estimation

no code implementations • CVPR 2023 • Taeyeop Lee, Jonathan Tremblay, Valts Blukis, Bowen Wen, Byeong-Uk Lee, Inkyu Shin, Stan Birchfield, In So Kweon, Kuk-Jin Yoon

Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, our approach processes the test data in a sequential, online manner, and it does not require access to the source domain at runtime.

Object Pose Estimation +2
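"Sequential, online" means the model updates itself on each incoming test frame using only self-supervision, never revisiting source-domain data. A generic loop of that shape (the actual TTA-COPE objective is more involved; self_loss here is only a placeholder):

    import torch

    def adapt_online(model, test_stream, self_loss, lr=1e-4):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        poses = []
        for frame in test_stream:          # frames arrive one at a time
            pred = model(frame)
            loss = self_loss(pred, frame)  # self-supervised; no source labels
            opt.zero_grad()
            loss.backward()
            opt.step()                     # adapt before the next frame arrives
            poses.append(pred.detach())
        return poses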

BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects

1 code implementation • CVPR 2023 • Bowen Wen, Jonathan Tremblay, Valts Blukis, Stephen Tyree, Thomas Müller, Alex Evans, Dieter Fox, Jan Kautz, Stan Birchfield

We present a near real-time method for 6-DoF tracking of an unknown object from a monocular RGBD video sequence, while simultaneously performing neural 3D reconstruction of the object.

3D Object Tracking • 3D Reconstruction +5

MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare

no code implementations • 13 Dec 2022 • Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, Josef Sivic

Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.

6D Pose Estimation • Object
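The coarse stage can be read as classification over renders: render the object at many candidate poses and pick the hypothesis the network judges close enough for the refiner to correct. A sketch with renderer and classifier as stand-ins for the trained MegaPose components:

    import torch

    @torch.no_grad()
    def coarse_pose(renderer, classifier, image, pose_candidates):
        scores = []
        for pose in pose_candidates:
            synthetic = renderer(pose)    # synthetic rendering at this pose
            scores.append(classifier(image, synthetic).item())  # "refinable?" score
        best = max(range(len(scores)), key=scores.__getitem__)
        return pose_candidates[best]      # hand off to the refiner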

One-Shot Neural Fields for 3D Object Understanding

no code implementations • 21 Oct 2022 • Valts Blukis, Taeyeop Lee, Jonathan Tremblay, Bowen Wen, In So Kweon, Kuk-Jin Yoon, Dieter Fox, Stan Birchfield

At test-time, we build the representation from a single RGB input image observing the scene from only one viewpoint.

3D Reconstruction • Object +2

Parallel Inversion of Neural Radiance Fields for Robust Pose Estimation

1 code implementation • 18 Oct 2022 • Yunzhi Lin, Thomas Müller, Jonathan Tremblay, Bowen Wen, Stephen Tyree, Alex Evans, Patricio A. Vela, Stan Birchfield

We present a parallelized optimization method based on fast Neural Radiance Fields (NeRF) for estimating 6-DoF pose of a camera with respect to an object or scene.

Pose Estimation
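The key word is "parallelized": rather than refining a single camera-pose guess, a whole batch of hypotheses is optimized against the NeRF simultaneously on the GPU, and the best survivor is returned. A sketch assuming a batched, differentiable nerf_render function, not the paper's actual implementation:

    import torch

    def parallel_pose_opt(nerf_render, target, pose_batch, steps=50, lr=1e-2):
        poses = pose_batch.clone().requires_grad_(True)  # (N, 6) hypotheses
        opt = torch.optim.Adam([poses], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            renders = nerf_render(poses)                 # (N, H, W, 3), batched
            losses = (renders - target).abs().mean(dim=(1, 2, 3))
            losses.sum().backward()     # each hypothesis gets its own gradient
            opt.step()
        with torch.no_grad():
            final = (nerf_render(poses) - target).abs().mean(dim=(1, 2, 3))
        return poses[final.argmin()].detach()            # best surviving pose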

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models

no code implementations • 22 Sep 2022 • Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg

To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information.
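The scoring use is simple to picture: given the instruction and the action history as a program-like prompt, ask the LLM for the likelihood of each candidate next action and take the most probable one. A sketch where llm_logprob stands in for any API that returns the log-probability of a continuation given a prompt:

    def choose_next_action(llm_logprob, instruction, history, candidates):
        prompt = f"# Task: {instruction}\n" + "\n".join(history) + "\n"
        scored = [(llm_logprob(prompt, action), action) for action in candidates]
        return max(scored)[1]   # highest-likelihood next action under the LLM

    # e.g. choose_next_action(llm_logprob, "make coffee", ["grab(mug)"],
    #                         ["put(mug, coffee_machine)", "grab(knife)"])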

Variable Bitrate Neural Fields

1 code implementation • 15 Jun 2022 • Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Müller, Morgan McGuire, Alec Jacobson, Sanja Fidler

Neural approximations of scalar and vector fields, such as signed distance functions and radiance fields, have emerged as accurate, high-quality representations.

Keypoint-Based Category-Level Object Pose Tracking from an RGB Sequence with Uncertainty Estimation

1 code implementation • 23 May 2022 • Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan Birchfield

We propose a single-stage, category-level 6-DoF pose estimation algorithm that simultaneously detects and tracks instances of objects within a known category.

Pose Estimation • Pose Tracking

RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis

no code implementations • 14 May 2022 • Jonathan Tremblay, Moustafa Meshry, Alex Evans, Jan Kautz, Alexander Keller, Sameh Khamis, Thomas Müller, Charles Loop, Nathan Morrical, Koki Nagano, Towaki Takikawa, Stan Birchfield

We present a large-scale synthetic dataset for novel view synthesis consisting of ~300k images rendered from nearly 2000 complex scenes using high-quality ray tracing at high resolution (1600 x 1600 pixels).

Novel View Synthesis

6-DoF Pose Estimation of Household Objects for Robotic Manipulation: An Accessible Dataset and Benchmark

1 code implementation • 11 Mar 2022 • Stephen Tyree, Jonathan Tremblay, Thang To, Jia Cheng, Terry Mosier, Jeffrey Smith, Stan Birchfield

We propose a set of toy grocery objects, whose physical instantiations are readily available for purchase and are appropriately sized for robotic grasping and manipulation.

Pose Estimation • Robotic Grasping

Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects

1 code implementation • CVPR 2022 • Atsuhiro Noguchi, Umar Iqbal, Jonathan Tremblay, Tatsuya Harada, Orazio Gallo

Rendering articulated objects while controlling their poses is critical to applications such as virtual reality or animation for movies.

Object

Single-Stage Keypoint-Based Category-Level Object Pose Estimation from an RGB Image

1 code implementation • 13 Sep 2021 • Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan Birchfield

Prior work on 6-DoF object pose estimation has largely focused on instance-level processing, in which a textured CAD model is available for each object being detected.

Object • object-detection +2
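Keypoint-based pose estimators like this one typically recover the 6-DoF pose from predicted 2D keypoints and their canonical 3D counterparts via PnP. A minimal version of that geometric step (the keypoint network itself is omitted):

    import cv2
    import numpy as np

    def pose_from_keypoints(kps_2d, kps_3d, K):
        """kps_2d: (N,2) pixels; kps_3d: (N,3) canonical points; K: 3x3 intrinsics."""
        ok, rvec, tvec = cv2.solvePnP(
            kps_3d.astype(np.float64), kps_2d.astype(np.float64),
            K.astype(np.float64), None, flags=cv2.SOLVEPNP_EPNP)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)   # axis-angle to rotation matrix
        return R, tvec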

NViSII: A Scriptable Tool for Photorealistic Image Generation

2 code implementations • 28 May 2021 • Nathan Morrical, Jonathan Tremblay, Yunzhi Lin, Stephen Tyree, Stan Birchfield, Valerio Pascucci, Ingo Wald

We present a Python-based renderer built on NVIDIA's OptiX ray tracing engine and the OptiX AI denoiser, designed to generate high-quality synthetic images for research in computer vision and deep learning.

Image Generation • Optical Flow Estimation +1
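Usage is a few lines of Python. A minimal example adapted from the project's published samples; parameter names may vary slightly across NViSII versions:

    import nvisii

    nvisii.initialize(headless=True)
    camera = nvisii.entity.create(
        name="camera",
        transform=nvisii.transform.create("camera_tf"),
        camera=nvisii.camera.create(name="camera_cam", aspect=1.0))
    nvisii.set_camera_entity(camera)

    sphere = nvisii.entity.create(
        name="sphere",
        mesh=nvisii.mesh.create_sphere("sphere_mesh"),
        transform=nvisii.transform.create("sphere_tf"),
        material=nvisii.material.create("sphere_mat"))
    sphere.get_transform().set_position(nvisii.vec3(0, 0, -3))

    nvisii.render_to_file(width=512, height=512, samples_per_pixel=64,
                          file_path="out.png")
    nvisii.deinitialize()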

Fast Uncertainty Quantification for Deep Object Pose Estimation

no code implementations • 16 Nov 2020 • Guanya Shi, Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Fabio Ramos, Animashree Anandkumar, Yuke Zhu

Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image is outside the training domain, for instance with sim2real transfer.

Object Pose Estimation +1
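One fast, training-free remedy of the kind this paper studies is ensemble disagreement: run several independently trained estimators and use the spread of their predictions as the uncertainty signal. A rough sketch, with the estimators and the pose-distance metric left as placeholders:

    import itertools
    import numpy as np

    def ensemble_uncertainty(estimators, image, pose_distance):
        poses = [est(image) for est in estimators]
        # average pairwise disagreement; large values flag unreliable inputs
        pairs = itertools.combinations(poses, 2)
        disagreement = np.mean([pose_distance(a, b) for a, b in pairs])
        return poses, disagreement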

Indirect Object-to-Robot Pose Estimation from an External Monocular RGB Camera

1 code implementation • 26 Aug 2020 • Jonathan Tremblay, Stephen Tyree, Terry Mosier, Stan Birchfield

We present a robotic grasping system that uses a single external monocular RGB camera as input.

Robotics

Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning

no code implementations • 21 May 2020 • Michelle A. Lee, Carlos Florensa, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox

Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.

Contextual Reinforcement Learning of Visuo-tactile Multi-fingered Grasping Policies

no code implementations • 21 Nov 2019 • Visak Kumar, Tucker Hermans, Dieter Fox, Stan Birchfield, Jonathan Tremblay

We propose a Grasping Objects Approach for Tactile robotic hands (GOAT), which learns to overcome the reality-gap problem.

Robotics

Camera-to-Robot Pose Estimation from a Single Image

2 code implementations • 21 Nov 2019 • Timothy E. Lee, Jonathan Tremblay, Thang To, Jia Cheng, Terry Mosier, Oliver Kroemer, Dieter Fox, Stan Birchfield

We show experimental results for three different camera sensors, demonstrating that our approach is able to achieve accuracy with a single frame that is better than that of classic off-line hand-eye calibration using multiple frames.

Robotics
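The single-frame idea: detect 2D keypoints at the robot's joints, obtain the matching 3D points in the robot base frame from forward kinematics at the known joint angles, and solve PnP (as in the keypoint sketch above) for the camera-to-robot transform. A sketch with detect_keypoints and forward_kinematics as placeholders:

    import cv2
    import numpy as np

    def camera_to_robot(image, joint_angles, detect_keypoints, forward_kinematics, K):
        kps_2d = detect_keypoints(image)            # (N,2) joint pixels
        kps_3d = forward_kinematics(joint_angles)   # (N,3) in robot base frame
        _, rvec, tvec = cv2.solvePnP(kps_3d.astype(np.float64),
                                     kps_2d.astype(np.float64),
                                     K.astype(np.float64), None)
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T                                    # robot base frame -> camera frame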

Few-Shot Viewpoint Estimation

no code implementations • 13 May 2019 • Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Stan Birchfield, Ming-Hsuan Yang, Jan Kautz

Through extensive experimentation on the ObjectNet3D and Pascal3D+ benchmark datasets, we demonstrate that our framework, which we call MetaView, significantly outperforms fine-tuning the state-of-the-art models with few examples, and that the specific architectural innovations of our method are crucial to achieving good performance.

Meta-Learning • Viewpoint Estimation

Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects

8 code implementations • 27 Sep 2018 • Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, Stan Birchfield

Using synthetic data generated in this manner, we introduce a one-shot deep neural network that is able to perform competitively against a state-of-the-art network trained on a combination of real and synthetic data.

Robotics

Synthetically Trained Neural Networks for Learning Human-Readable Plans from Real-World Demonstrations

1 code implementation • 18 May 2018 • Jonathan Tremblay, Thang To, Artem Molchanov, Stephen Tyree, Jan Kautz, Stan Birchfield

We present a system to infer and execute a human-readable program from a real-world demonstration.

Robotics

Falling Things: A Synthetic Dataset for 3D Object Detection and Pose Estimation

no code implementations • 18 Apr 2018 • Jonathan Tremblay, Thang To, Stan Birchfield

We present a new dataset, called Falling Things (FAT), for advancing the state-of-the-art in object detection and 3D pose estimation in the context of robotics.

3D Object Detection • 3D Pose Estimation +1
