Search Results for author: Yann Labbé

Found 7 papers, 4 papers with code

FoundPose: Unseen Object Pose Estimation with Foundation Features

no code implementations • 30 Nov 2023 • Evin Pınar Örnek, Yann Labbé, Bugra Tekin, Lingni Ma, Cem Keskin, Christian Forster, Tomas Hodan

Pose hypotheses are then generated from 2D-3D correspondences established by matching DINOv2 patch features between the query image and a retrieved template, and finally optimized by featuremetric refinement.

6D Pose Estimation • Object • +1
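
A minimal sketch of the 2D-3D correspondence and PnP step summarized in the FoundPose snippet above. The cosine-similarity nearest-neighbour matching, array shapes, and RANSAC settings are assumptions of this sketch; the actual DINOv2 feature extraction, template retrieval, and featuremetric refinement are not reproduced here, so the hypothesis is returned unrefined.

```python
# Sketch only: one pose hypothesis from patch-feature matches between a query
# image and a retrieved template (DINOv2 features replaced by generic arrays).
import numpy as np
import cv2

def pose_hypothesis_from_patches(query_desc, query_xy, tmpl_desc, tmpl_xyz, K):
    """query_desc: (Nq, D) patch descriptors of the query image
       query_xy:   (Nq, 2) pixel centers of those patches
       tmpl_desc:  (Nt, D) patch descriptors of a retrieved template
       tmpl_xyz:   (Nt, 3) 3D object-space points behind the template patches
       K:          (3, 3) camera intrinsics"""
    # Cosine-similarity nearest neighbour: each query patch picks one template patch.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    t = tmpl_desc / np.linalg.norm(tmpl_desc, axis=1, keepdims=True)
    nn = (q @ t.T).argmax(axis=1)  # (Nq,) index of best template patch per query patch

    # 2D-3D correspondences -> pose hypothesis via PnP + RANSAC.
    obj_pts = tmpl_xyz[nn].astype(np.float64)
    img_pts = query_xy.astype(np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, K.astype(np.float64), distCoeffs=None,
        reprojectionError=8.0, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel(), inliers
```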

FocalPose++: Focal Length and Object Pose Estimation via Render and Compare

1 code implementation • 15 Nov 2023 • Martin Cífka, Georgy Ponimatkin, Yann Labbé, Bryan Russell, Mathieu Aubry, Vladimir Petrik, Josef Sivic

We introduce FocalPose++, a neural render-and-compare method for jointly estimating the camera-object 6D pose and camera focal length given a single RGB input image depicting a known object.

Object Pose Estimation
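
A hypothetical sketch of a render-and-compare loop that jointly updates the object pose and the focal length, in the spirit of the FocalPose++ snippet above. The callbacks, the left-multiplicative pose update, and the multiplicative focal-length update are assumptions of this sketch, not the paper's exact parametrization or network.

```python
# Sketch only: iterative render-and-compare over (pose, focal length).
import numpy as np

def render_and_compare(image, init_pose, init_f, render, predict_update, n_iters=5):
    """image:          observed RGB crop of the known object
       init_pose:      initial 4x4 object-to-camera transform
       init_f:         initial focal length estimate (pixels)
       render:         callable(pose, f) -> synthetic rendering of the object
       predict_update: callable(image, rendering) -> (delta_pose 4x4, delta_f scalar)"""
    pose, f = init_pose.copy(), float(init_f)
    for _ in range(n_iters):
        rendering = render(pose, f)
        delta_pose, delta_f = predict_update(image, rendering)
        pose = delta_pose @ pose      # left-multiplicative pose update (assumption)
        f = f * np.exp(delta_f)       # keeps the focal length positive (assumption)
    return pose, f
```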

MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare

no code implementations • 13 Dec 2022 • Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, Josef Sivic

Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.

6D Pose Estimation • Object
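
A hypothetical sketch of the coarse-pose selection idea described in the MegaPose snippet above: render the novel object at candidate poses and keep the hypothesis that a classifier judges most likely to be correctable by the refiner. The renderer and the classifier are placeholder callbacks assumed for illustration.

```python
# Sketch only: score each candidate pose by its "refinability" and keep the best.
import numpy as np

def select_coarse_pose(image, candidate_poses, render, refinability_score):
    """image:              observed RGB crop of the novel object
       candidate_poses:    list of 4x4 candidate object-to-camera transforms
       render:             callable(pose) -> synthetic rendering of the object
       refinability_score: callable(image, rendering) -> probability that the pose
                           error is small enough for the refiner to correct"""
    scores = [refinability_score(image, render(pose)) for pose in candidate_poses]
    best = int(np.argmax(scores))
    return candidate_poses[best], scores[best]
```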

Focal Length and Object Pose Estimation via Render and Compare

2 code implementations • CVPR 2022 • Georgy Ponimatkin, Yann Labbé, Bryan Russell, Mathieu Aubry, Josef Sivic

We introduce FocalPose, a neural render-and-compare method for jointly estimating the camera-object 6D pose and camera focal length given a single RGB input image depicting a known object.

Object Pose Estimation • +1

Single-view robot pose and joint angle estimation via render & compare

no code implementations • CVPR 2021 • Yann Labbé, Justin Carpentier, Mathieu Aubry, Josef Sivic

We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.

Pose Estimation • Robot Pose Estimation
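
A hypothetical sketch of the RoboPose idea, analogous to the render-and-compare loop sketched for FocalPose++ above but with a joint-angle vector in place of the focal length. The callbacks and the additive joint-angle update are assumptions, not the paper's parametrization.

```python
# Sketch only: iterative render-and-compare over (base pose, joint angles).
import numpy as np

def estimate_robot_state(image, init_pose, init_q, render, predict_update, n_iters=5):
    """image:          observed RGB image of the known articulated robot
       init_pose:      initial 4x4 camera-to-robot base transform
       init_q:         initial joint-angle vector, shape (n_joints,)
       render:         callable(pose, q) -> rendering of the robot posed via forward kinematics
       predict_update: callable(image, rendering) -> (delta_pose 4x4, delta_q (n_joints,))"""
    pose, q = init_pose.copy(), np.asarray(init_q, dtype=float).copy()
    for _ in range(n_iters):
        rendering = render(pose, q)
        delta_pose, delta_q = predict_update(image, rendering)
        pose = delta_pose @ pose   # update base pose (assumption: left-multiplicative)
        q = q + delta_q            # update joint angles (assumption: additive)
    return pose, q
```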

CosyPose: Consistent multi-view multi-object 6D pose estimation

3 code implementations • ECCV 2020 • Yann Labbé, Justin Carpentier, Mathieu Aubry, Josef Sivic

Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene.

6D Pose Estimation • 6D Pose Estimation using RGB • +1
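
A rough sketch of the cross-view consistency check suggested by the CosyPose snippet above: two single-view pose hypotheses of the same object are matched if, under a candidate relative camera transform, they land at nearly the same place in a common frame. The translation-only test and the threshold are simplifications of my own; the paper's full matching and global scene refinement are not reproduced.

```python
# Sketch only: count object-pose hypothesis pairs consistent with a candidate
# relative camera transform between two views.
import numpy as np

def count_consistent_matches(hyps_a, hyps_b, T_b_to_a, trans_thresh=0.02):
    """hyps_a, hyps_b: lists of (object_label, 4x4 object-to-camera pose) for views A and B
       T_b_to_a:       candidate 4x4 transform mapping camera-B coordinates to camera-A
       returns:        number of hypothesis pairs that agree in camera-A coordinates"""
    matches = 0
    for label_a, T_obj_a in hyps_a:
        for label_b, T_obj_b in hyps_b:
            if label_a != label_b:
                continue
            T_obj_b_in_a = T_b_to_a @ T_obj_b   # express the view-B hypothesis in view A
            err = np.linalg.norm(T_obj_a[:3, 3] - T_obj_b_in_a[:3, 3])
            if err < trans_thresh:              # translation-only check, for brevity
                matches += 1
    return matches
```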

Monte-Carlo Tree Search for Efficient Visually Guided Rearrangement Planning

2 code implementations • 23 Apr 2019 • Yann Labbé, Sergey Zagoruyko, Igor Kalevatykh, Ivan Laptev, Justin Carpentier, Mathieu Aubry, Josef Sivic

We address the problem of visually guided rearrangement planning with many movable objects, i.e., finding a sequence of actions to move a set of objects from an initial arrangement to a desired one, while relying on visual inputs coming from an RGB camera.
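
A generic, minimal Monte-Carlo Tree Search skeleton, included only to illustrate the planning component named in the title above; the paper's rearrangement-specific state/action space, visual state estimation, and rollout policy are not modeled. The `actions`, `step`, and `is_goal` callbacks are hypothetical and must be supplied by the caller (actions are assumed hashable).

```python
# Sketch only: plain MCTS (selection / expansion / rollout / backpropagation).
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children = []
        self.visits, self.value = 0, 0.0

def mcts(root_state, actions, step, is_goal, n_simulations=200, max_depth=20, c=1.4):
    root = Node(root_state)
    for _ in range(n_simulations):
        # 1. Selection: descend via UCB1 while every action of the node already has a child.
        node = root
        while node.children and len(node.children) == len(actions(node.state)):
            node = max(node.children, key=lambda ch:
                       ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
        # 2. Expansion: add a child for one untried action.
        if not is_goal(node.state):
            tried = {ch.action for ch in node.children}
            untried = [a for a in actions(node.state) if a not in tried]
            if untried:
                a = random.choice(untried)
                node.children.append(Node(step(node.state, a), parent=node, action=a))
                node = node.children[-1]
        # 3. Rollout: random actions until the goal or a depth limit; shorter plans score higher.
        state, depth = node.state, 0
        while not is_goal(state) and depth < max_depth:
            candidates = actions(state)
            if not candidates:
                break
            state = step(state, random.choice(candidates))
            depth += 1
        reward = 1.0 / (1.0 + depth) if is_goal(state) else 0.0
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited first action (a common convention, not paper-specific).
    return max(root.children, key=lambda ch: ch.visits).action if root.children else None
```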
