Search Results for author: Mohamed El Banani

Found 7 papers, 6 papers with code

Probing the 3D Awareness of Visual Foundation Models

1 code implementation CVPR 2024 Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanzhen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, Varun Jampani

Given that such models can classify, delineate, and localize objects in 2D, we ask: do they also represent their 3D structure?

Learning Visual Representations via Language-Guided Sampling

1 code implementation CVPR 2023 Mohamed El Banani, Karan Desai, Justin Johnson

Our approach diverges from image-based contrastive learning by sampling view pairs using language similarity instead of hand-crafted augmentations or learned clusters.

Contrastive Learning · Representation Learning
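The language-guided sampling idea can be sketched in a few lines: embed each image's caption, then treat images whose captions are nearest neighbors in embedding space as a positive view pair. The bag-of-words embedding below is a toy stand-in for a real text encoder, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def embed(caption, vocab):
    # Toy bag-of-words embedding; a real system would use a learned text encoder.
    vec = np.zeros(len(vocab))
    for word in caption.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def language_guided_pairs(captions):
    """For each caption, return (i, j) where j is its nearest neighbor by
    cosine similarity; the corresponding images would form a view pair."""
    words = sorted({w for c in captions for w in c.lower().split()})
    vocab = {w: i for i, w in enumerate(words)}
    embs = np.stack([embed(c, vocab) for c in captions])
    sims = embs @ embs.T
    np.fill_diagonal(sims, -np.inf)  # exclude pairing an image with itself
    return [(i, int(np.argmax(sims[i]))) for i in range(len(captions))]

captions = [
    "a dog running on grass",
    "a puppy dog on the grass",
    "a red sports car",
    "a blue car on the road",
]
pairs = language_guided_pairs(captions)  # e.g. pairs[0] == (0, 1)
```

The contrast with standard image-based contrastive learning is that the two views come from different images with similar descriptions, rather than two augmentations of the same image.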

Self-Supervised Correspondence Estimation via Multiview Registration

1 code implementation 6 Dec 2022 Mohamed El Banani, Ignacio Rocco, David Novotny, Andrea Vedaldi, Natalia Neverova, Justin Johnson, Benjamin Graham

To address this, we propose a self-supervised approach for correspondence estimation that learns from multiview consistency in short RGB-D video sequences.

Bootstrap Your Own Correspondences

no code implementations ICCV 2021 Mohamed El Banani, Justin Johnson

Our approach combines classic ideas from point cloud registration with more recent representation learning approaches.

Point Cloud Registration · Representation Learning

UnsupervisedR&R: Unsupervised Point Cloud Registration via Differentiable Rendering

1 code implementation CVPR 2021 Mohamed El Banani, Luya Gao, Justin Johnson

Aligning partial views of a scene into a single whole is essential to understanding one's environment and is a key component of numerous robotics tasks such as SLAM and SfM.

Point Cloud Registration
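The "aligning partial views" step at the heart of point cloud registration has a classic closed-form core: given putative 3D correspondences, the least-squares rigid transform is recovered with the Kabsch (orthogonal Procrustes) algorithm. The sketch below shows only that classic step, under synthetic correspondences; it is not the paper's differentiable-rendering pipeline.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    i.e. minimizing ||(P @ R.T + t) - Q||^2 (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Demo: recover a known rotation and translation from exact correspondences.
theta = np.pi / 5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
P = np.random.default_rng(0).normal(size=(20, 3))
Q = P @ R_true.T + t_true
R_est, t_est = kabsch(P, Q)
```

Self-supervised pipelines like the one above make the hard part finding the correspondences; once they are known, the alignment itself is closed-form.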

Novel Object Viewpoint Estimation through Reconstruction Alignment

1 code implementation CVPR 2020 Mohamed El Banani, Jason J. Corso, David F. Fouhey

Our key insight is that although we do not have an explicit 3D model or a predefined canonical pose, we can still learn to estimate the object's shape in the viewer's frame and then use an image to provide our reference model or canonical pose.

Image-to-Image Translation · Object +1

Adviser Networks: Learning What Question to Ask for Human-In-The-Loop Viewpoint Estimation

1 code implementation 5 Feb 2018 Mohamed El Banani, Jason J. Corso

We address this question by formulating it as an Adviser Problem: can we learn a mapping from the input to a specific question to ask the human so as to maximize the expected positive impact on the overall task?

Viewpoint Estimation
