Search Results for author: Jonathan Tremblay

Found 14 papers, 8 papers with code

Single-stage Keypoint-based Category-level Object Pose Estimation from an RGB Image

no code implementations • 13 Sep 2021 • Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan Birchfield

Prior work on 6-DoF object pose estimation has largely focused on instance-level processing, in which a textured CAD model is available for each object being detected.

2D Object Detection • Pose Estimation

NViSII: A Scriptable Tool for Photorealistic Image Generation

2 code implementations • 28 May 2021 • Nathan Morrical, Jonathan Tremblay, Yunzhi Lin, Stephen Tyree, Stan Birchfield, Valerio Pascucci, Ingo Wald

We present a Python-based renderer built on NVIDIA's OptiX ray tracing engine and the OptiX AI denoiser, designed to generate high-quality synthetic images for research in computer vision and deep learning.

Image Generation • Optical Flow Estimation • +1

Fast Uncertainty Quantification for Deep Object Pose Estimation

no code implementations • 16 Nov 2020 • Guanya Shi, Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Fabio Ramos, Animashree Anandkumar, Yuke Zhu

Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image is outside the training domain, for instance, with sim2real transfer.

Pose Estimation

Indirect Object-to-Robot Pose Estimation from an External Monocular RGB Camera

1 code implementation • 26 Aug 2020 • Jonathan Tremblay, Stephen Tyree, Terry Mosier, Stan Birchfield

We present a robotic grasping system that uses a single external monocular RGB camera as input.

Robotics

Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning

no code implementations • 21 May 2020 • Michelle A. Lee, Carlos Florensa, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox

Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.

Camera-to-Robot Pose Estimation from a Single Image

2 code implementations • 21 Nov 2019 • Timothy E. Lee, Jonathan Tremblay, Thang To, Jia Cheng, Terry Mosier, Oliver Kroemer, Dieter Fox, Stan Birchfield

We show experimental results for three different camera sensors, demonstrating that our approach is able to achieve accuracy with a single frame that is better than that of classic off-line hand-eye calibration using multiple frames.

Robotics

Contextual Reinforcement Learning of Visuo-tactile Multi-fingered Grasping Policies

no code implementations • 21 Nov 2019 • Visak Kumar, Tucker Hermans, Dieter Fox, Stan Birchfield, Jonathan Tremblay

We propose a Grasping Objects Approach for Tactile (GOAT) robotic hands, learning to overcome the reality gap problem.

Robotics

Few-Shot Viewpoint Estimation

no code implementations • 13 May 2019 • Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Stan Birchfield, Ming-Hsuan Yang, Jan Kautz

Through extensive experimentation on the ObjectNet3D and Pascal3D+ benchmark datasets, we demonstrate that our framework, which we call MetaView, significantly outperforms fine-tuning the state-of-the-art models with few examples, and that the specific architectural innovations of our method are crucial to achieving good performance.

Fine-tuning • Meta-Learning • +1

Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects

7 code implementations • 27 Sep 2018 • Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, Stan Birchfield

Using synthetic data generated in this manner, we introduce a one-shot deep neural network that is able to perform competitively against a state-of-the-art network trained on a combination of real and synthetic data.

Robotics

Synthetically Trained Neural Networks for Learning Human-Readable Plans from Real-World Demonstrations

1 code implementation • 18 May 2018 • Jonathan Tremblay, Thang To, Artem Molchanov, Stephen Tyree, Jan Kautz, Stan Birchfield

We present a system to infer and execute a human-readable program from a real-world demonstration.

Robotics

Falling Things: A Synthetic Dataset for 3D Object Detection and Pose Estimation

no code implementations • 18 Apr 2018 • Jonathan Tremblay, Thang To, Stan Birchfield

We present a new dataset, called Falling Things (FAT), for advancing the state-of-the-art in object detection and 3D pose estimation in the context of robotics.

3D Object Detection • 3D Pose Estimation
