Search Results for author: Abhinav Moudgil

Found 6 papers, 3 papers with code

Can We Learn Communication-Efficient Optimizers?

no code implementations 2 Dec 2023 Charles-Étienne Joseph, Benjamin Thérien, Abhinav Moudgil, Boris Knyazev, Eugene Belilovsky

Although many variants of these approaches have been proposed, they can sometimes lag behind state-of-the-art adaptive optimizers for deep learning.

Language Modelling

Towards Scaling Difference Target Propagation by Learning Backprop Targets

1 code implementation 31 Jan 2022 Maxence Ernoult, Fabrice Normandin, Abhinav Moudgil, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake Richards, Yoshua Bengio

As such, it is important to explore learning algorithms that come with strong theoretical guarantees and can match the performance of backpropagation (BP) on complex tasks.

SOAT: A Scene- and Object-Aware Transformer for Vision-and-Language Navigation

no code implementations NeurIPS 2021 Abhinav Moudgil, Arjun Majumdar, Harsh Agrawal, Stefan Lee, Dhruv Batra

Natural language instructions for visual navigation often use scene descriptions (e.g., "bedroom") and object references (e.g., "green chairs") to provide a breadcrumb trail to a goal location.

Object Scene Classification +2

Contrast and Classify: Training Robust VQA Models

1 code implementation ICCV 2021 Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, Harsh Agrawal

Recent Visual Question Answering (VQA) models have shown impressive performance on the VQA benchmark but remain sensitive to small linguistic variations in input questions.

Contrastive Learning Data Augmentation +4

Exploring 3 R's of Long-term Tracking: Re-detection, Recovery and Reliability

no code implementations 27 Oct 2019 Shyamgopal Karthik, Abhinav Moudgil, Vineet Gandhi

Recent works have proposed several long-term tracking benchmarks and highlight the importance of moving towards long-duration tracking to bridge the gap with application requirements.

Long-Term Visual Object Tracking Benchmark

1 code implementation 4 Dec 2017 Abhinav Moudgil, Vineet Gandhi

We propose a new long video dataset (called Track Long and Prosper - TLP) and benchmark for single object tracking.

Object Visual Object Tracking +1
