Search Results for author: Hemang Chawla

Found 10 papers, 7 papers with code

Transformers in Unsupervised Structure-from-Motion

1 code implementation · 16 Dec 2023 · Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz

Transformers have revolutionized deep learning based computer vision with improved performance as well as robustness to natural corruptions and adversarial attacks.

Tasks: Decision Making, Image Classification, +4 more

Continual Learning of Unsupervised Monocular Depth from Videos

1 code implementation · 4 Nov 2023 · Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz

Spatial scene understanding, including monocular depth estimation, is an important problem in various applications, such as robotics and autonomous driving.

Tasks: Autonomous Driving, Continual Learning, +4 more

Adversarial Attacks on Monocular Pose Estimation

1 code implementation · 14 Jul 2022 · Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz

While studies evaluating the impact of adversarial attacks on monocular depth estimation exist, a systematic demonstration and analysis of adversarial perturbations against pose estimation are lacking.

Tasks: Monocular Depth Estimation, Object Detection, +3 more

Transformers in Self-Supervised Monocular Depth Estimation with Unknown Camera Intrinsics

1 code implementation · 7 Feb 2022 · Arnav Varma, Hemang Chawla, Bahram Zonooz, Elahe Arani

While recent works have compared transformers against their CNN counterparts for tasks such as image classification, no study exists that investigates the impact of using transformers for self-supervised monocular depth estimation.

Tasks: Autonomous Driving, Depth Prediction, +3 more

Crowdsourced 3D Mapping: A Combined Multi-View Geometry and Self-Supervised Learning Approach

1 code implementation · 25 Jul 2020 · Hemang Chawla, Matti Jukola, Terence Brouns, Elahe Arani, Bahram Zonooz

The ability to efficiently utilize crowdsourced visual data carries immense potential for the domains of large scale dynamic mapping and autonomous driving.

Tasks: Autonomous Driving, Motion Estimation, +1 more
