Search Results for author: Rohit Girdhar

Found 38 papers, 25 papers with code

SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos

no code implementations CVPR 2024 Changan Chen, Kumar Ashutosh, Rohit Girdhar, David Harwath, Kristen Grauman

We propose a novel self-supervised embedding to learn how actions sound from narrated in-the-wild egocentric videos.

Generating Illustrated Instructions

1 code implementation CVPR 2024 Sachit Menon, Ishan Misra, Rohit Girdhar

We introduce the new task of generating Illustrated Instructions, i.e., visual instructions customized to a user's needs.

Text-to-Image Generation

Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning

no code implementations 17 Nov 2023 Rohit Girdhar, Mannat Singh, Andrew Brown, Quentin Duval, Samaneh Azadi, Sai Saketh Rambhatla, Akbar Shah, Xi Yin, Devi Parikh, Ishan Misra

We present Emu Video, a text-to-video generation model that factorizes the generation into two steps: first generating an image conditioned on the text, and then generating a video conditioned on the text and the generated image.

Text-to-Video Generation · Video Generation

VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation

1 code implementation CVPR 2024 Xudong Wang, Ishan Misra, Ziyun Zeng, Rohit Girdhar, Trevor Darrell

Existing approaches to unsupervised video instance segmentation typically rely on motion estimates and experience difficulties tracking small or divergent motions.

Instance Segmentation · Optical Flow Estimation +5

ImageBind: One Embedding Space To Bind Them All

1 code implementation CVPR 2023 Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

We show that not all combinations of paired data are necessary to train such a joint embedding; image-paired data alone is sufficient to bind the modalities together.
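The binding idea can be sketched as an image-anchored contrastive objective: each non-image modality is trained only against its paired images, so modalities that never co-occur (e.g. audio and text) become comparable through the shared image space. The numpy sketch below is purely illustrative, not the paper's implementation; the function names and temperature value are assumptions.

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def infonce_loss(img_emb, other_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of (image, other-modality) pairs.
    Minimizing this pulls every modality toward its paired image embedding,
    which transitively aligns modalities that were never paired directly."""
    img = l2norm(img_emb)
    oth = l2norm(other_emb)
    logits = img @ oth.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))             # positives on the diagonal
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_i2o = -log_p[labels, labels].mean()
    log_p_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_o2i = -log_p_t[labels, labels].mean()
    return 0.5 * (loss_i2o + loss_o2i)
```

Correctly paired batches should score a much lower loss than mismatched ones, which is what drives the alignment.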

Cross-Modal Retrieval · Multimodal Deep Learning +10

Learning to Substitute Ingredients in Recipes

1 code implementation 15 Feb 2023 Bahare Fatemi, Quentin Duval, Rohit Girdhar, Michal Drozdzal, Adriana Romero-Soriano

Recipe personalization through ingredient substitution has the potential to help people meet their dietary needs and preferences, avoid potential allergens, and ease culinary exploration in everyone's kitchen.

Recipe Generation

What You Say Is What You Show: Visual Narration Detection in Instructional Videos

no code implementations 5 Jan 2023 Kumar Ashutosh, Rohit Girdhar, Lorenzo Torresani, Kristen Grauman

Narrated "how-to" videos have emerged as a promising data source for a wide range of learning problems, from learning visual representations to training robot policies.

HierVL: Learning Hierarchical Video-Language Embeddings

1 code implementation CVPR 2023 Kumar Ashutosh, Rohit Girdhar, Lorenzo Torresani, Kristen Grauman

Video-language embeddings are a promising avenue for injecting semantics into visual representations, but existing methods capture only short-term associations between seconds-long video clips and their accompanying text.

Action Classification · Action Recognition +3

OmniMAE: Single Model Masked Pretraining on Images and Videos

1 code implementation CVPR 2023 Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

Furthermore, this model can be learned by dropping 90% of the image and 95% of the video patches, enabling extremely fast training of huge model architectures.
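The masking step is the concrete part of that claim: the encoder only ever sees the small fraction of patch tokens that survive random dropping, which is what makes such aggressive masking cheap. A hypothetical numpy sketch (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def random_masking(patches, keep_ratio, rng):
    """Keep a random subset of patch tokens and report which were dropped.
    With keep_ratio=0.10 the encoder processes only ~10% of the tokens."""
    n = patches.shape[0]
    n_keep = max(1, int(round(n * keep_ratio)))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)       # True = masked (to be reconstructed)
    mask[keep_idx] = False
    return patches[keep_idx], mask, keep_idx

# e.g. a 14x14 grid of image patches with embedding dim 32
rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 32))
visible, mask, idx = random_masking(patches, keep_ratio=0.10, rng=rng)
```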

Omnivore: A Single Model for Many Visual Modalities

2 code implementations CVPR 2022 Rohit Girdhar, Mannat Singh, Nikhila Ravi, Laurens van der Maaten, Armand Joulin, Ishan Misra

Prior work has studied different visual modalities in isolation and developed separate architectures for recognition of images, videos, and 3D data.

Ranked #1 on Scene Recognition on SUN-RGBD (using extra training data)

Action Classification · Action Recognition +3

Detecting Twenty-thousand Classes using Image-level Supervision

1 code implementation 7 Jan 2022 Xingyi Zhou, Rohit Girdhar, Armand Joulin, Philipp Krähenbühl, Ishan Misra

For the first time, we train a detector with all the twenty-one-thousand classes of the ImageNet dataset and show that it generalizes to new datasets without finetuning.

Cross-Domain Few-Shot Object Detection · Image Classification +1

Mask2Former for Video Instance Segmentation

5 code implementations 20 Dec 2021 Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexander Kirillov, Rohit Girdhar, Alexander G. Schwing

We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline.

Image Segmentation · Instance Segmentation +5

Ego4D: Around the World in 3,000 Hours of Egocentric Video

8 code implementations CVPR 2022 Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.

De-identification · Ethics

Anticipative Video Transformer

1 code implementation ICCV 2021 Rohit Girdhar, Kristen Grauman

We propose Anticipative Video Transformer (AVT), an end-to-end attention-based video modeling architecture that attends to the previously observed video in order to anticipate future actions.
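The "attends to the previously observed video" constraint is the standard causal attention mask: frame t may attend only to frames 0..t, so a prediction never peeks at the future it is asked to anticipate. A minimal single-head numpy sketch of that mechanism (illustrative only, not AVT's implementation; projections and heads are omitted):

```python
import numpy as np

def causal_attention(q, k, v):
    """Self-attention restricted to the past: position t attends to 0..t."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    mask = np.tril(np.ones((n, n), dtype=bool))   # lower triangle = allowed
    scores = np.where(mask, scores, -np.inf)      # block attention to the future
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = e / e.sum(axis=1, keepdims=True)
    return weights @ v
```

Because of the mask, the first frame's output depends only on the first frame itself, and each later output mixes in only earlier frames.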

Ranked #2 on Action Anticipation on EPIC-KITCHENS-100 (test) (using extra training data)

Action Anticipation

3D Spatial Recognition without Spatially Labeled 3D

1 code implementation CVPR 2021 Zhongzheng Ren, Ishan Misra, Alexander G. Schwing, Rohit Girdhar

We introduce WyPR, a Weakly-supervised framework for Point cloud Recognition, requiring only scene-level class tags as supervision.

3D Object Detection · Multiple Instance Learning +5

Physical Reasoning Using Dynamics-Aware Models

1 code implementation 20 Feb 2021 Eltayeb Ahmed, Anton Bakhtin, Laurens van der Maaten, Rohit Girdhar

A common approach to solving physical reasoning tasks is to train a value learner on example tasks.

Visual Reasoning

Self-Supervised Pretraining of 3D Features on any Point-Cloud

1 code implementation ICCV 2021 Zaiwei Zhang, Rohit Girdhar, Armand Joulin, Ishan Misra

Pretraining on large labeled datasets is a prerequisite for good performance in many computer vision tasks, such as 2D object recognition and video classification.

Object · object-detection +4

Forward Prediction for Physical Reasoning

1 code implementation 18 Jun 2020 Rohit Girdhar, Laura Gustafson, Aaron Adcock, Laurens van der Maaten

Physical reasoning requires forward prediction: the ability to forecast what will happen next given some initial world state.

Visual Reasoning

Video Understanding as Machine Translation

no code implementations 12 Jun 2020 Bruno Korbar, Fabio Petroni, Rohit Girdhar, Lorenzo Torresani

With the advent of large-scale multimodal video datasets, especially sequences with audio or transcribed speech, there has been a growing interest in self-supervised learning of video representations.

Machine Translation · Metric Learning +6

CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning

no code implementations ICLR 2020 Rohit Girdhar, Deva Ramanan

In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved.

Object · Video Understanding

Are we asking the right questions in MovieQA?

no code implementations 8 Nov 2019 Bhavan Jasani, Rohit Girdhar, Deva Ramanan

Joint vision and language tasks like visual question answering are fascinating because they explore high-level understanding, but at the same time, can be more prone to language biases.

Question Answering · Visual Question Answering

MetaPix: Few-Shot Video Retargeting

no code implementations ICLR 2020 Jessica Lee, Deva Ramanan, Rohit Girdhar

We address the task of unsupervised retargeting of human actions from one video to another.

Meta-Learning

CATER: A diagnostic dataset for Compositional Actions and TEmporal Reasoning

2 code implementations 10 Oct 2019 Rohit Girdhar, Deva Ramanan

In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved.

Object · Video Object Tracking +1

DistInit: Learning Video Representations Without a Single Labeled Video

no code implementations ICCV 2019 Rohit Girdhar, Du Tran, Lorenzo Torresani, Deva Ramanan

In this work, we propose an alternative approach to learning video representations that requires no semantically labeled videos and instead leverages the years of effort spent collecting and labeling large, clean still-image datasets.

Ranked #71 on Action Recognition on HMDB-51 (using extra training data)

Action Recognition · Temporal Action Localization +1

Binge Watching: Scaling Affordance Learning from Sitcoms

no code implementations CVPR 2017 Xiaolong Wang, Rohit Girdhar, Abhinav Gupta

In this paper, we tackle the challenge of creating one of the biggest datasets for learning affordances.

Detect-and-Track: Efficient Pose Estimation in Videos

1 code implementation CVPR 2018 Rohit Girdhar, Georgia Gkioxari, Lorenzo Torresani, Manohar Paluri, Du Tran

This paper addresses the problem of estimating and tracking human body keypoints in complex, multi-person video.

Ranked #8 on Pose Tracking on PoseTrack2017 (using extra training data)

Human Detection · Keypoint Estimation +4

Attentional Pooling for Action Recognition

1 code implementation NeurIPS 2017 Rohit Girdhar, Deva Ramanan

We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks.
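The core mechanism is easy to state: replace global average pooling with a softmax-weighted average, where the weights come from a learned scoring of each spatial location. A hypothetical numpy sketch of that general idea (the projection vector `w_attn` stands in for learned parameters; this illustrates weighted pooling, not the paper's exact low-rank formulation):

```python
import numpy as np

def attentional_pool(features, w_attn):
    """features: (N, D) spatial features; w_attn: (D,) learned scorer.
    Returns an attention-weighted average instead of a uniform one."""
    scores = features @ w_attn            # (N,) one scalar score per location
    e = np.exp(scores - scores.max())
    weights = e / e.sum()                 # softmax over spatial locations
    return weights @ features             # (D,) pooled descriptor
```

With a zero scorer the weights are uniform and the operation reduces exactly to average pooling, which makes the attention variant a strict generalization.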

Action Recognition · Human-Object Interaction Detection +1

ActionVLAD: Learning spatio-temporal aggregation for action classification

no code implementations CVPR 2017 Rohit Girdhar, Deva Ramanan, Abhinav Gupta, Josef Sivic, Bryan Russell

In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video.

Action Classification · Classification +3

Learning a Predictable and Generative Vector Representation for Objects

2 code implementations 29 Mar 2016 Rohit Girdhar, David F. Fouhey, Mikel Rodriguez, Abhinav Gupta

The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable.

Retrieval
