no code implementations • 15 Jan 2013 • Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers.
no code implementations • CVPR 2013 • Jeff Donahue, Judy Hoffman, Erik Rodner, Kate Saenko, Trevor Darrell
Most successful object classification and detection methods rely on classifiers trained on large labeled datasets.
no code implementations • 20 Aug 2013 • Erik Rodner, Judy Hoffman, Jeff Donahue, Trevor Darrell, Kate Saenko
Images seen during test time are often not from the same distribution as images used for learning.
8 code implementations • 6 Oct 2013 • Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.
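This feature re-purposing recipe (released as DeCAF) amounts to a linear probe: freeze the pretrained network, extract activations from a late layer, and fit a linear classifier on top. A minimal numpy sketch of that general pattern, using synthetic stand-in features in place of real CNN activations (the actual pipeline extracts activations from a supervised convolutional network):

```python
import numpy as np

# Stand-in for activations extracted from a frozen, pretrained CNN:
# in practice these would come from a late layer of the network.
rng = np.random.default_rng(0)
n_per_class, dim = 50, 64
class_means = rng.normal(size=(2, dim))
features = np.vstack([rng.normal(loc=m, scale=0.5, size=(n_per_class, dim))
                      for m in class_means])
labels = np.repeat([0, 1], n_per_class)

# Train a linear classifier on the frozen features via least squares.
X = np.hstack([features, np.ones((features.shape[0], 1))])   # add bias column
w, *_ = np.linalg.lstsq(X, 2.0 * labels - 1.0, rcond=None)   # targets in {-1, +1}

preds = (X @ w > 0).astype(int)
accuracy = (preds == labels).mean()
print(f"linear-probe accuracy: {accuracy:.2f}")
```

Only the small linear head is trained; the feature extractor stays fixed, which is what makes the transfer to novel tasks cheap.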
no code implementations • 21 Dec 2013 • Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, Trevor Darrell
In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be?
no code implementations • CVPR 2014 • Judy Hoffman, Trevor Darrell, Kate Saenko
The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them.
1 code implementation • NeurIPS 2014 • Judy Hoffman, Sergio Guadarrama, Eric Tzeng, Ronghang Hu, Jeff Donahue, Ross Girshick, Trevor Darrell, Kate Saenko
A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories.
no code implementations • CVPR 2015 • Judy Hoffman, Deepak Pathak, Trevor Darrell, Kate Saenko
We develop methods for detector learning which exploit joint training over both weak and strong labels and which transfer learned perceptual representations from strongly-labeled auxiliary tasks.
7 code implementations • 10 Dec 2014 • Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, Trevor Darrell
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.
Ranked #6 on Domain Adaptation on Office-Caltech
1 code implementation • CVPR 2016 • Saurabh Gupta, Judy Hoffman, Jitendra Malik
In this work we propose a technique that transfers supervision between images from different modalities.
1 code implementation • ICCV 2015 • Eric Tzeng, Judy Hoffman, Trevor Darrell, Kate Saenko
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias.
no code implementations • ICCV 2015 • Damian Mrowca, Marcus Rohrbach, Judy Hoffman, Ronghang Hu, Kate Saenko, Trevor Darrell
Our approach proves to be especially useful in large scale settings with thousands of classes, where spatial and semantic interactions are very frequent and only weakly supervised detectors can be built due to a lack of bounding box annotations.
no code implementations • 16 Oct 2015 • Oscar Beijbom, Judy Hoffman, Evan Yao, Trevor Darrell, Alberto Rodriguez-Ramirez, Manuel Gonzalez-Rivero, Ove Hoegh-Guldberg
Quantification is the task of estimating the class distribution of a dataset.
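Unlike classification, only the aggregate proportions matter here. One classical quantification baseline (adjusted classify-and-count; not necessarily the method proposed in this paper) corrects the raw predicted positive rate using the classifier's true- and false-positive rates:

```python
def adjusted_classify_and_count(predictions, tpr, fpr):
    """Estimate the true positive-class prevalence from binary predictions.

    predictions: iterable of 0/1 classifier outputs on the test set
    tpr, fpr: true/false positive rates measured on held-out labeled data
    """
    predictions = list(predictions)
    observed_rate = sum(predictions) / len(predictions)
    # Invert: observed_rate = prevalence * tpr + (1 - prevalence) * fpr
    estimate = (observed_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, estimate))  # clip to a valid proportion

# A classifier with tpr=0.8, fpr=0.1 that fires on 45% of the test set
# implies a true prevalence of (0.45 - 0.1) / (0.8 - 0.1) = 0.5.
preds = [1] * 45 + [0] * 55
estimate = adjusted_classify_and_count(preds, tpr=0.8, fpr=0.1)
print(estimate)
```

Note the correction matters precisely when class distributions shift between training and deployment, which is the setting this paper studies.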
no code implementations • 23 Nov 2015 • Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, Trevor Darrell
We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains.
no code implementations • 21 May 2016 • Xingchao Peng, Judy Hoffman, Stella X. Yu, Kate Saenko
We address the difficult problem of distinguishing fine-grained object categories in low resolution images.
no code implementations • CVPR 2016 • Judy Hoffman, Saurabh Gupta, Trevor Darrell
Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart.
1 code implementation • 11 Aug 2016 • Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell
Recent years have seen tremendous progress in still-image segmentation; however, the naïve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video.
3 code implementations • 8 Dec 2016 • Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell
In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems.
Ranked #2 on Image-to-Image Translation on SYNTHIA Fall-to-Winter
20 code implementations • CVPR 2017 • Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
5 code implementations • ICCV 2017 • Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, Ross Girshick
Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes.
Ranked #5 on Visual Question Answering (VQA) on CLEVR-Humans
no code implementations • ICCV 2017 • Timnit Gebru, Judy Hoffman, Li Fei-Fei
While fine-grained object recognition is an important problem in computer vision, current models are unlikely to accurately classify objects in the wild.
2 code implementations • 18 Oct 2017 • Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, Kate Saenko
We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains.
3 code implementations • ICML 2018 • Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell
Domain adaptation is critical for success in new, unseen environments.
no code implementations • 14 Nov 2017 • Judy Hoffman, Mehryar Mohri, Ningshan Zhang
We present a detailed theoretical analysis of the problem of multiple-source adaptation in the general stochastic scenario, extending known results that assume a single target labeling function.
no code implementations • NeurIPS 2017 • Zelun Luo, Yuliang Zou, Judy Hoffman, Li Fei-Fei
We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner.
no code implementations • NeurIPS 2018 • Judy Hoffman, Mehryar Mohri, Ningshan Zhang
This work includes a number of novel contributions for the multiple-source adaptation problem.
no code implementations • 26 Jun 2018 • Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, Kate Saenko
In this paper, we present a new large-scale benchmark called Syn2Real, which consists of a synthetic domain rendered from 3D object models and two real-image domains containing the same object categories.
1 code implementation • 21 Feb 2019 • Benjamin Wilson, Judy Hoffman, Jamie Morgenstern
In this work, we investigate whether state-of-the-art object detection systems have equitable predictive performance on pedestrians with different skin tones.
1 code implementation • ICCV 2019 • Daniel Gordon, Abhishek Kadian, Devi Parikh, Judy Hoffman, Dhruv Batra
We propose SplitNet, a method for decoupling visual perception and policy learning.
2 code implementations • ICLR 2020 • Judy Hoffman, Daniel A. Roberts, Sho Yaida
Design of reliable systems must guarantee stability against input perturbations.
1 code implementation • 17 Oct 2019 • Yogesh Balaji, Tom Goldstein, Judy Hoffman
Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks.
no code implementations • ICLR 2020 • Erik Wijmans, Julian Straub, Irfan Essa, Dhruv Batra, Judy Hoffman, Ari Morcos
Surprisingly, we find that slight differences in task have no measurable effect on the visual representation for both SqueezeNet and ResNet architectures.
no code implementations • 26 Feb 2020 • Or Litany, Ari Morcos, Srinath Sridhar, Leonidas Guibas, Judy Hoffman
We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision.
no code implementations • 12 Mar 2020 • Erik Wijmans, Julian Straub, Dhruv Batra, Irfan Essa, Judy Hoffman, Ari Morcos
Recent advances in deep reinforcement learning require a large amount of training data and generally result in representations that are often overspecialized to the target task.
2 code implementations • ECCV 2020 • Daniel Bolya, Sean Foley, James Hays, Judy Hoffman
We introduce TIDE, a framework and associated toolbox for analyzing the sources of error in object detection and instance segmentation algorithms.
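Error taxonomies for detection like TIDE's hinge on intersection-over-union thresholds between predicted and ground-truth boxes (e.g., a localization error is commonly an overlap that falls between a background and a foreground IoU threshold). A self-contained IoU helper, using (x1, y1, x2, y2) corner coordinates as an illustrative convention:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

gt = (0, 0, 10, 10)
pred = (5, 0, 15, 10)    # prediction shifted by half a box width
overlap = iou(gt, pred)  # inter = 50, union = 150
print(overlap)           # → 0.3333...
```

Binning each prediction's best overlap against such thresholds is what lets an analysis tool attribute error mass to distinct causes (localization vs. classification vs. missed detections).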
no code implementations • 25 Aug 2020 • Fu Lin, Rohit Mittapalli, Prithvijit Chattopadhyay, Daniel Bolya, Judy Hoffman
Convolutional Neural Networks have been shown to be vulnerable to adversarial examples, which are known to lie in subspaces close to those where natural data lies, yet are not naturally occurring and have low probability.
1 code implementation • ECCV 2020 • Prithvijit Chattopadhyay, Yogesh Balaji, Judy Hoffman
For domain generalization, the goal is to learn from a set of source domains to produce a single model that will best generalize to an unseen target domain.
Ranked #24 on Domain Generalization on DomainNet
no code implementations • 7 Sep 2020 • Samyak Datta, Oleksandr Maksymets, Judy Hoffman, Stefan Lee, Dhruv Batra, Devi Parikh
This enables a seamless adaptation to changing dynamics (a different robot or floor type) by simply re-calibrating the visual odometry model -- circumventing the expense of re-training the navigation policy.
Ranked #5 on Robot Navigation on Habitat 2020 Point Nav test-std
no code implementations • NeurIPS 2020 • Baifeng Shi, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
By adjusting the auxiliary task weights to minimize the divergence between the surrogate prior and the true prior of the main task, we obtain a more accurate prior estimation, achieving the goal of minimizing the required amount of training data for the main task and avoiding a costly grid search.
1 code implementation • ICCV 2021 • Viraj Prabhu, Arjun Chandrasekaran, Kate Saenko, Judy Hoffman
Generalizing deep neural networks to new target domains is critical to their real-world utility.
no code implementations • ICCV 2021 • Baifeng Shi, Qi Dai, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
We extensively benchmark against the baselines for SSAD and OSAD on our created data splits in THUMOS14 and ActivityNet1.2, and demonstrate the effectiveness of the proposed UFA and IB methods.
2 code implementations • ICCV 2021 • Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
Many existing approaches for unsupervised domain adaptation (UDA) focus on adapting under only data distribution shift and offer limited success under additional cross-domain label distribution shift.
Ranked #13 on Domain Adaptation on Office-Home
1 code implementation • ICCV 2021 • Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, Aniruddha Kembhavi
As a step towards assessing the robustness of embodied navigation agents, we propose RobustNav, a framework to quantify the performance of embodied navigation agents when exposed to a wide variety of visual corruptions (affecting RGB inputs) and dynamics corruptions (affecting transition dynamics).
no code implementations • 21 Jul 2021 • Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
Most modern approaches for domain adaptive semantic segmentation rely on continued access to source data during adaptation, which may be infeasible due to computational or privacy constraints.
1 code implementation • 29 Oct 2021 • Arvindkumar Krishnakumar, Viraj Prabhu, Sruthi Sudhakar, Judy Hoffman
Deep learning models have been shown to learn spurious correlations from data that sometimes lead to systematic failures for certain subpopulations.
1 code implementation • NeurIPS 2021 • Daniel Bolya, Rohit Mittapalli, Judy Hoffman
In this paper, we formalize this setting as "Scalable Diverse Model Selection" and propose several benchmarks for evaluating on this task.
no code implementations • 30 Mar 2022 • Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau
We present ConceptEvo, a unified interpretation framework for deep neural networks (DNNs) that reveals the inception and evolution of learned concepts during training.
1 code implementation • CVPR 2022 • Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau
CNN image classifiers are widely used, thanks to their efficiency and accuracy.
no code implementations • 23 Apr 2022 • Viraj Prabhu, Ramprasaath R. Selvaraju, Judy Hoffman, Nikhil Naik
Despite the rapid progress in deep visual recognition, modern computer vision datasets significantly overrepresent the developed world and models trained on such datasets underperform on images from unseen geographies.
1 code implementation • 16 Jun 2022 • Viraj Prabhu, Sriram Yenamandra, Aaditya Singh, Judy Hoffman
Inspired by the design of recent SSL approaches based on learning from partial image inputs generated via masking or cropping -- either by learning to predict the missing pixels, or learning representational invariances to such augmentations -- we propose PACMAC, a simple two-stage adaptation algorithm for self-supervised ViTs.
1 code implementation • 24 Jun 2022 • Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, Dhruv Batra
We present a scalable approach for learning open-world object-goal navigation (ObjectNav) -- the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., "find a sink").
no code implementations • 15 Sep 2022 • Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
While transformers have begun to dominate many tasks in vision, applying them to large images is still computationally difficult.
3 code implementations • 17 Oct 2022 • Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, Judy Hoffman
Off-the-shelf, ToMe can 2x the throughput of state-of-the-art ViT-L @ 512 and ViT-H @ 518 models on images and 2.2x the throughput of ViT-L on video with only a 0.2-0.3% accuracy drop in each case.
Ranked #13 on Efficient ViTs on ImageNet-1K (with DeiT-S)
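The throughput gains come from merging redundant tokens rather than pruning them. A drastically simplified numpy sketch of the core idea -- average the r most similar token pairs across a bipartite split of the sequence (the released ToMe implementation differs in detail, e.g. it tracks token sizes for weighted averaging):

```python
import numpy as np

def merge_tokens(tokens, r):
    """Merge the r most similar (src, dst) token pairs by averaging.

    tokens: (n, d) array of token embeddings; returns (n - r, d).
    Simplified bipartite matching: alternating tokens form the two sets.
    """
    src, dst = tokens[0::2].copy(), tokens[1::2].copy()
    # Cosine similarity between every src token and every dst token.
    a = src / np.linalg.norm(src, axis=1, keepdims=True)
    b = dst / np.linalg.norm(dst, axis=1, keepdims=True)
    sim = a @ b.T
    best_dst = sim.argmax(axis=1)           # each src token's best match in dst
    best_sim = sim.max(axis=1)
    merge_ids = np.argsort(-best_sim)[:r]   # the r most confidently matched src tokens
    keep_ids = np.setdiff1d(np.arange(len(src)), merge_ids)

    for i in merge_ids:                     # fold each merged src into its dst partner
        dst[best_dst[i]] = (dst[best_dst[i]] + src[i]) / 2
    return np.vstack([src[keep_ids], dst])

tokens = np.random.default_rng(0).normal(size=(16, 8))
merged = merge_tokens(tokens, r=4)
print(merged.shape)  # → (12, 8)
```

Because the sequence shrinks by r tokens at every layer where merging is applied, the quadratic attention cost drops without retraining the model.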
no code implementations • 20 Nov 2022 • Chia-Wen Kuo, Chih-Yao Ma, Judy Hoffman, Zsolt Kira
In Vision-and-Language Navigation (VLN), researchers typically take an image encoder pre-trained on ImageNet without fine-tuning on the environments that the agent will be trained or tested on.
no code implementations • 25 Nov 2022 • Sachit Kuhar, Alexey Tumanov, Judy Hoffman
Efficient inference of Deep Neural Networks (DNNs) is essential to making AI ubiquitous.
1 code implementation • ICCV 2023 • Prithvijit Chattopadhyay, Kartik Sarangmath, Vivek Vijaykumar, Judy Hoffman
Synthetic data offers the promise of cheap and bountiful training data for settings where labeled real-world data is scarce.
no code implementations • 8 Feb 2023 • Sruthi Sudhakar, Viraj Prabhu, Arvindkumar Krishnakumar, Judy Hoffman
We visualize the feature space of the transformer self-attention modules and discover that a significant portion of the bias is encoded in the query matrix.
no code implementations • 9 Feb 2023 • Viraj Prabhu, David Acuna, Andrew Liao, Rafid Mahmood, Marc T. Law, Judy Hoffman, Sanja Fidler, James Lucas
Sim2Real domain adaptation (DA) research focuses on the constrained setting of adapting from a labeled synthetic source domain to an unlabeled or sparsely labeled real target domain.
1 code implementation • 17 Mar 2023 • Arun V. Reddy, Ketul Shah, William Paul, Rohita Mocharla, Judy Hoffman, Kapil D. Katyal, Dinesh Manocha, Celso M. de Melo, Rama Chellappa
The dataset is composed of both real and synthetic videos from seven gesture classes, and is intended to support the study of synthetic-to-real domain shift for video-based action recognition.
3 code implementations • 30 Mar 2023 • Daniel Bolya, Judy Hoffman
In the process, we speed up image generation by up to 2x and reduce memory consumption by up to 5.6x.
1 code implementation • ICCV 2023 • Aaditya Singh, Kartik Sarangmath, Prithvijit Chattopadhyay, Judy Hoffman
Robustness to natural distribution shifts has seen remarkable progress thanks to recent pre-training strategies combined with better fine-tuning methods.
1 code implementation • 4 May 2023 • George Stoica, Daniel Bolya, Jakob Bjorner, Pratik Ramesh, Taylor Hearn, Judy Hoffman
While this works for models trained on the same task, we find that this fails to account for the differences in models trained on disjoint tasks.
2 code implementations • NeurIPS 2023 • Viraj Prabhu, Sriram Yenamandra, Prithvijit Chattopadhyay, Judy Hoffman
We propose an automated algorithm to stress-test a trained visual model by generating language-guided counterfactual test images (LANCE).
2 code implementations • 1 Jun 2023 • Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer
Modern hierarchical vision transformers have added several vision-specific components in the pursuit of supervised classification performance.
Ranked #1 on Image Classification on iNaturalist 2019 (using extra training data)
no code implementations • 7 Jun 2023 • Sruthi Sudhakar, Viraj Prabhu, Olga Russakovsky, Judy Hoffman
As computer vision systems are being increasingly deployed at scale in high-stakes applications like autonomous driving, concerns about social bias in these systems are rising.
no code implementations • 28 Sep 2023 • Benjamin Hoover, Hendrik Strobelt, Dmitry Krotov, Judy Hoffman, Zsolt Kira, Duen Horng Chau
Diffusion Models (DMs) have recently set state-of-the-art on many generation benchmarks.
1 code implementation • ICCV 2023 • Sriram Yenamandra, Pratik Ramesh, Viraj Prabhu, Judy Hoffman
Computer vision datasets frequently contain spurious correlations between task-relevant labels and (easy to learn) latent task-irrelevant attributes (e.g., context).
2 code implementations • NeurIPS 2023 • Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein
Battle of the Backbones (BoB) makes this choice easier by benchmarking a diverse suite of pretrained models, including vision-language models, those trained via self-supervised learning, and the Stable Diffusion backbone, across a diverse set of computer vision tasks ranging from classification to object detection to OOD generalization and more.
no code implementations • 9 Nov 2023 • Daniel Bolya, Chaitanya Ryali, Judy Hoffman, Christoph Feichtenhofer
To fix it, we introduce a simple absolute window position embedding strategy, which solves the bug outright in Hiera and allows us to increase both speed and performance of the model in ViTDet.
no code implementations • 30 Nov 2023 • Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, Eugene Byrne, Zach Chavis, Joya Chen, Feng Cheng, Fu-Jen Chu, Sean Crane, Avijit Dasgupta, Jing Dong, Maria Escobar, Cristhian Forigua, Abrham Gebreselasie, Sanjay Haresh, Jing Huang, Md Mohaiminul Islam, Suyog Jain, Rawal Khirodkar, Devansh Kukreja, Kevin J Liang, Jia-Wei Liu, Sagnik Majumder, Yongsen Mao, Miguel Martin, Effrosyni Mavroudi, Tushar Nagarajan, Francesco Ragusa, Santhosh Kumar Ramakrishnan, Luigi Seminara, Arjun Somayazulu, Yale Song, Shan Su, Zihui Xue, Edward Zhang, Jinxu Zhang, Angela Castillo, Changan Chen, Xinzhu Fu, Ryosuke Furuta, Cristina Gonzalez, Prince Gupta, Jiabo Hu, Yifei HUANG, Yiming Huang, Weslie Khoo, Anush Kumar, Robert Kuo, Sach Lakhavani, Miao Liu, Mi Luo, Zhengyi Luo, Brighid Meredith, Austin Miller, Oluwatumininu Oguntola, Xiaqing Pan, Penny Peng, Shraman Pramanick, Merey Ramazanova, Fiona Ryan, Wei Shan, Kiran Somasundaram, Chenan Song, Audrey Southerland, Masatoshi Tateno, Huiyu Wang, Yuchen Wang, Takuma Yagi, Mingfei Yan, Xitong Yang, Zecheng Yu, Shengxin Cindy Zha, Chen Zhao, Ziwei Zhao, Zhifan Zhu, Jeff Zhuo, Pablo Arbelaez, Gedas Bertasius, David Crandall, Dima Damen, Jakob Engel, Giovanni Maria Farinella, Antonino Furnari, Bernard Ghanem, Judy Hoffman, C. V. Jawahar, Richard Newcombe, Hyun Soo Park, James M. Rehg, Yoichi Sato, Manolis Savva, Jianbo Shi, Mike Zheng Shou, Michael Wray
We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge.
no code implementations • 11 Dec 2023 • Prithvijit Chattopadhyay, Bharat Goyal, Boglarka Ecsedi, Viraj Prabhu, Judy Hoffman
Synthetic data (SIM) drawn from simulators have emerged as a popular alternative for training models where acquiring annotated real-world images is difficult.
no code implementations • 11 Dec 2023 • Sahil Khose, Anisha Pal, Aayushi Agarwal, Deepanshi, Judy Hoffman, Prithvijit Chattopadhyay
Real-world aerial scene understanding is limited by a lack of datasets that contain densely annotated images curated under a diverse set of conditions.
1 code implementation • 1 Feb 2024 • Simar Kareer, Vivek Vijaykumar, Harsh Maheshwari, Prithvijit Chattopadhyay, Judy Hoffman, Viraj Prabhu
While the vast majority of prior work has studied this as a frame-level Image-DAS problem, a few Video-DAS works have sought to additionally leverage the temporal signal present in adjacent frames.