1 code implementation • 2 Dec 2022 • Prithvijit Chattopadhyay, Kartik Sarangmath, Vivek Vijaykumar, Judy Hoffman
Synthetic data offers the promise of cheap and bountiful training data for settings where large amounts of labeled real-world data are unavailable.
no code implementations • 25 Nov 2022 • Sachit Kuhar, Alexey Tumanov, Judy Hoffman
We propose a new method called signed-binary networks to further improve efficiency (by exploiting both weight sparsity and weight repetition) while maintaining similar accuracy.
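As a rough illustration of the signed-binary idea (weights constrained to {-1, 0, +1}, so that the zeros give sparsity and the surviving weights repeat a single magnitude), here is a minimal straight-through quantizer sketch; the threshold and the absence of per-channel scaling are assumptions, not the paper's exact formulation.

```python
import torch

class SignedBinaryQuantize(torch.autograd.Function):
    """Quantize weights to {-1, 0, +1} with a straight-through gradient.

    Simplified stand-in for signed-binary networks: small weights are zeroed
    (sparsity), the rest keep only their sign (repetition). The fixed
    threshold below is an illustrative assumption.
    """

    @staticmethod
    def forward(ctx, w, threshold=0.05):
        q = torch.sign(w)
        q[w.abs() < threshold] = 0.0
        return q

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients through unchanged.
        return grad_output, None


def quantized_linear(x, w, b=None):
    """Apply a linear layer whose weights are signed-binary at inference time."""
    return torch.nn.functional.linear(x, SignedBinaryQuantize.apply(w), b)
```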
no code implementations • 20 Nov 2022 • Chia-Wen Kuo, Chih-Yao Ma, Judy Hoffman, Zsolt Kira
In Vision-and-Language Navigation (VLN), researchers typically take an image encoder pre-trained on ImageNet without fine-tuning on the environments that the agent will be trained or tested on.
1 code implementation • 17 Oct 2022 • Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, Judy Hoffman
Off-the-shelf, ToMe can 2x the throughput of state-of-the-art ViT-L @ 512 and ViT-H @ 518 models on images and 2.2x the throughput of ViT-L on video with only a 0.2-0.3% accuracy drop in each case.
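The mechanism behind that speedup, merging the most similar tokens between layers rather than pruning them, can be sketched roughly as below. This is a simplified nearest-pair average, not the paper's bipartite soft matching, and the per-call reduction `r` is illustrative.

```python
import torch

def merge_similar_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Reduce a token sequence x of shape (N, C) by r tokens.

    Simplified sketch: repeatedly average the single most similar pair of
    tokens (by cosine similarity). The real ToMe algorithm uses a faster
    bipartite soft matching, but the effect -- fewer, merged tokens -- is the same.
    """
    for _ in range(r):
        feats = torch.nn.functional.normalize(x, dim=-1)
        sim = feats @ feats.T
        sim.fill_diagonal_(-float("inf"))          # ignore self-similarity
        i, j = divmod(int(sim.argmax()), sim.shape[1])
        merged = (x[i] + x[j]) / 2                 # average the closest pair
        keep = [k for k in range(x.shape[0]) if k not in (i, j)]
        x = torch.cat([x[keep], merged.unsqueeze(0)], dim=0)
    return x
```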
no code implementations • 15 Sep 2022 • Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
While transformers have begun to dominate many tasks in vision, applying them to large images is still computationally difficult.
no code implementations • 24 Jun 2022 • Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, Dhruv Batra
We present a scalable approach for learning open-world object-goal navigation (ObjectNav) -- the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., "find a sink").
1 code implementation • 16 Jun 2022 • Viraj Prabhu, Sriram Yenamandra, Aaditya Singh, Judy Hoffman
Inspired by the design of recent SSL approaches based on learning from partial image inputs generated via masking or cropping -- either by learning to predict the missing pixels, or learning representational invariances to such augmentations -- we propose PACMAC, a simple two-stage adaptation algorithm for self-supervised ViTs.
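A minimal sketch of the masking-and-consistency idea described above: generate several patch-masked views of a target image with a ViT classifier and keep a pseudo-label only when the masked views agree with the full view. The masking scheme and helper names here are illustrative, not the paper's exact attention-conditioned procedure.

```python
import torch

def masked_views(img: torch.Tensor, patch: int = 16, keep: float = 0.5, n_views: int = 3):
    """Return n_views copies of img (C, H, W) with random patch-level masks.

    Illustrative masking only: each view zeroes out a random (1 - keep)
    fraction of patch x patch blocks. Assumes H and W are divisible by patch.
    """
    c, h, w = img.shape
    gh, gw = h // patch, w // patch
    views = []
    for _ in range(n_views):
        mask = (torch.rand(gh, gw) < keep).float()
        mask = mask.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
        views.append(img * mask)                      # zero out masked patches
    return views


def consistent_pseudo_label(model, img):
    """Return a pseudo-label only if all masked views agree with the full view."""
    with torch.no_grad():
        preds = [model(v.unsqueeze(0)).argmax(-1) for v in [img, *masked_views(img)]]
    agree = all(int(p) == int(preds[0]) for p in preds)
    return int(preds[0]) if agree else None
```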
no code implementations • 23 Apr 2022 • Viraj Prabhu, Ramprasaath R. Selvaraju, Judy Hoffman, Nikhil Naik
Despite the rapid progress in deep visual recognition, modern computer vision datasets significantly overrepresent the developed world and models trained on such datasets underperform on images from unseen geographies.
1 code implementation • CVPR 2022 • Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau
CNN image classifiers are widely used, thanks to their efficiency and accuracy.
no code implementations • 30 Mar 2022 • Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Judy Hoffman, Duen Horng Chau
Deep neural networks (DNNs) have been widely used for decision making, prompting a surge of interest in interpreting how these complex models work.
no code implementations • NeurIPS 2021 • Daniel Bolya, Rohit Mittapalli, Judy Hoffman
In this paper, we formalize this setting as "Scalable Diverse Model Selection" and propose several benchmarks for evaluating methods on this task.
1 code implementation • 29 Oct 2021 • Arvindkumar Krishnakumar, Viraj Prabhu, Sruthi Sudhakar, Judy Hoffman
Deep learning models have been shown to learn spurious correlations from data that sometimes lead to systematic failures for certain subpopulations.
no code implementations • 21 Jul 2021 • Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
Most modern approaches for domain adaptive semantic segmentation rely on continued access to source data during adaptation, which may be infeasible due to computational or privacy constraints.
1 code implementation • ICCV 2021 • Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, Aniruddha Kembhavi
As an attempt towards assessing the robustness of embodied navigation agents, we propose RobustNav, a framework to quantify the performance of embodied navigation agents when exposed to a wide variety of visual corruptions (affecting RGB inputs) and dynamics corruptions (affecting transition dynamics).
1 code implementation • ICCV 2021 • Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
Many existing approaches for unsupervised domain adaptation (UDA) focus on adapting under only data distribution shift and offer limited success under additional cross-domain label distribution shift.
Ranked #10 on Domain Adaptation on Office-Home
no code implementations • ICCV 2021 • Baifeng Shi, Qi Dai, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
We extensively benchmark against the baselines for SSAD and OSAD on data splits we create in THUMOS14 and ActivityNet1.2, and demonstrate the effectiveness of the proposed UFA and IB methods.
1 code implementation • ICCV 2021 • Viraj Prabhu, Arjun Chandrasekaran, Kate Saenko, Judy Hoffman
Generalizing deep neural networks to new target domains is critical to their real-world utility.
no code implementations • NeurIPS 2020 • Baifeng Shi, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
By adjusting the auxiliary task weights to minimize the divergence between the surrogate prior and the true prior of the main task, we obtain a more accurate prior estimation, achieving the goal of minimizing the required amount of training data for the main task and avoiding a costly grid search.
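The entry above hinges on weighting auxiliary tasks so that they help rather than hurt the main task. As a stand-in for the paper's divergence-minimization objective, the sketch below uses a simpler gradient-cosine-similarity heuristic (my assumption, not the paper's method): an auxiliary loss gets weight proportional to how well its gradient aligns with the main-task gradient.

```python
import torch
import torch.nn.functional as F

def aux_weight(main_loss, aux_loss, shared_params):
    """Weight an auxiliary loss by the (clamped) cosine similarity between its
    gradient and the main-task gradient on the shared parameters.

    A simple alignment heuristic, not the divergence-based reweighting
    described in the paper.
    """
    g_main = torch.autograd.grad(main_loss, shared_params, retain_graph=True)
    g_aux = torch.autograd.grad(aux_loss, shared_params, retain_graph=True)
    flat = lambda grads: torch.cat([g.reshape(-1) for g in grads])
    cos = F.cosine_similarity(flat(g_main), flat(g_aux), dim=0)
    return float(cos.clamp(min=0.0))

# Usage: total = main_loss + aux_weight(main_loss, aux_loss, params) * aux_loss
```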
no code implementations • 7 Sep 2020 • Samyak Datta, Oleksandr Maksymets, Judy Hoffman, Stefan Lee, Dhruv Batra, Devi Parikh
This enables a seamless adaptation to changing dynamics (a different robot or floor type) by simply re-calibrating the visual odometry model -- circumventing the expense of re-training of the navigation policy.
Ranked #5 on Robot Navigation on Habitat 2020 Point Nav test-std
1 code implementation • ECCV 2020 • Prithvijit Chattopadhyay, Yogesh Balaji, Judy Hoffman
For domain generalization, the goal is to learn from a set of source domains to produce a single model that will best generalize to an unseen target domain.
Ranked #15 on Domain Generalization on DomainNet
no code implementations • 25 Aug 2020 • Fu Lin, Rohit Mittapalli, Prithvijit Chattopadhyay, Daniel Bolya, Judy Hoffman
Convolutional Neural Networks have been shown to be vulnerable to adversarial examples, which are known to lie in subspaces close to those where natural data lies, yet are not naturally occurring and have low probability.
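For context on how such adversarial examples are typically constructed, here is a minimal FGSM-style sketch (a standard attack, not this paper's contribution): perturb the input along the sign of the loss gradient while staying within a small L-infinity budget.

```python
import torch

def fgsm_example(model, x, y, eps=8 / 255):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    x: input batch, y: true labels, eps: L-infinity perturbation budget.
    The result stays close to x yet is unlikely under natural data.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # step in the direction that raises the loss
    return x_adv.clamp(0, 1).detach()      # keep pixels in a valid range
```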
1 code implementation • ECCV 2020 • Daniel Bolya, Sean Foley, James Hays, Judy Hoffman
We introduce TIDE, a framework and associated toolbox for analyzing the sources of error in object detection and instance segmentation algorithms.
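A usage sketch for the released toolbox is shown below; the calls mirror the `tidecv` README at the time of writing and the results path is a placeholder, so check the repository for the current API.

```python
# pip install tidecv  -- verify against the current release, the API may have changed.
from tidecv import TIDE, datasets

tide = TIDE()
tide.evaluate(
    datasets.COCO(),                                   # ground-truth annotations
    datasets.COCOResult("path/to/detections.json"),    # placeholder results file
    mode=TIDE.BOX,                                     # analyze box (not mask) errors
)
tide.summarize()   # print the breakdown of error types (classification, localization, ...)
tide.plot()        # save summary plots
```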
no code implementations • 12 Mar 2020 • Erik Wijmans, Julian Straub, Dhruv Batra, Irfan Essa, Judy Hoffman, Ari Morcos
Recent advances in deep reinforcement learning require a large amount of training data and generally result in representations that are often over-specialized to the target task.
no code implementations • 26 Feb 2020 • Or Litany, Ari Morcos, Srinath Sridhar, Leonidas Guibas, Judy Hoffman
We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision.
no code implementations • ICLR 2020 • Erik Wijmans, Julian Straub, Irfan Essa, Dhruv Batra, Judy Hoffman, Ari Morcos
Surprisingly, we find that slight differences in task have no measurable effect on the visual representation for both SqueezeNet and ResNet architectures.
1 code implementation • 17 Oct 2019 • Yogesh Balaji, Tom Goldstein, Judy Hoffman
Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks.
2 code implementations • ICLR 2020 • Judy Hoffman, Daniel A. Roberts, Sho Yaida
Design of reliable systems must guarantee stability against input perturbations.
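The paper's approach to this stability requirement is to penalize the norm of the input-output Jacobian, so that small input perturbations cannot move the output much. Below is a minimal single-random-projection estimate of such a penalty, a simplified version of that kind of estimator rather than the paper's exact one.

```python
import torch

def jacobian_penalty(model, x):
    """Estimate ||J||_F^2 for the batch x with one random projection.

    J is the Jacobian of the model outputs w.r.t. the inputs; penalizing its
    norm encourages stability against small input perturbations.
    """
    x = x.clone().detach().requires_grad_(True)
    out = model(x)                                     # (B, num_classes)
    v = torch.randn_like(out)
    v = v / v.norm(dim=-1, keepdim=True)               # random unit direction per example
    (jv,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
    return out.shape[-1] * jv.pow(2).sum() / x.shape[0]

# Usage: loss = task_loss + lambda_jr * jacobian_penalty(model, inputs)
```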
1 code implementation • ICCV 2019 • Daniel Gordon, Abhishek Kadian, Devi Parikh, Judy Hoffman, Dhruv Batra
We propose SplitNet, a method for decoupling visual perception and policy learning.
1 code implementation • 21 Feb 2019 • Benjamin Wilson, Judy Hoffman, Jamie Morgenstern
In this work, we investigate whether state-of-the-art object detection systems have equitable predictive performance on pedestrians with different skin tones.
no code implementations • 26 Jun 2018 • Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, Kate Saenko
In this paper, we present a new large-scale benchmark called Syn2Real, which consists of a synthetic domain rendered from 3D object models and two real-image domains containing the same object categories.
no code implementations • NeurIPS 2018 • Judy Hoffman, Mehryar Mohri, Ningshan Zhang
This work includes a number of novel contributions for the multiple-source adaptation problem.
no code implementations • NeurIPS 2017 • Zelun Luo, Yuliang Zou, Judy Hoffman, Li Fei-Fei
We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner.
no code implementations • 14 Nov 2017 • Judy Hoffman, Mehryar Mohri, Ningshan Zhang
We present a detailed theoretical analysis of the problem of multiple-source adaptation in the general stochastic scenario, extending known results that assume a single target labeling function.
3 code implementations • ICML 2018 • Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell
Domain adaptation is critical for success in new, unseen environments.
1 code implementation • 18 Oct 2017 • Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, Kate Saenko
We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains.
no code implementations • ICCV 2017 • Timnit Gebru, Judy Hoffman, Li Fei-Fei
While fine-grained object recognition is an important problem in computer vision, current models are unlikely to accurately classify objects in the wild.
5 code implementations • ICCV 2017 • Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, Ross Girshick
Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes.
Ranked #5 on Visual Question Answering on CLEVR-Humans
18 code implementations • CVPR 2017 • Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
4 code implementations • 8 Dec 2016 • Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell
In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems.
Ranked #2 on Image-to-Image Translation on SYNTHIA Fall-to-Winter
1 code implementation • 11 Aug 2016 • Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell
Recent years have seen tremendous progress in still-image segmentation; however, the naïve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video.
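A minimal illustration of exploiting that temporal continuity: recompute the expensive deep features only every k-th frame and reuse the cached ones in between. The fixed schedule and the split into shallow/deep stages are illustrative simplifications; the clockwork approach adapts the schedule across network stages.

```python
import torch

def segment_video(frames, shallow, deep, head, k=3):
    """Segment frames, refreshing slow deep features only every k-th frame.

    shallow: cheap early layers, run on every frame.
    deep:    expensive later layers, refreshed on a fixed schedule.
    head:    fuses both into a per-frame prediction.
    """
    deep_feat = None
    outputs = []
    for t, frame in enumerate(frames):
        x = shallow(frame.unsqueeze(0))
        if t % k == 0 or deep_feat is None:
            deep_feat = deep(x)              # recompute the slow path on schedule
        outputs.append(head(x, deep_feat))   # cheap path always sees the current frame
    return outputs
```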
no code implementations • CVPR 2016 • Judy Hoffman, Saurabh Gupta, Trevor Darrell
Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart.
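A minimal sketch of this kind of cross-modal transfer: a frozen depth network provides mid-level features as targets that an RGB network learns to reproduce on paired RGB-D images. The layer choice and the plain L2 loss are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_modal_distillation_loss(rgb_net, depth_net, rgb, depth):
    """Match RGB-network features to (frozen) depth-network features on paired data.

    rgb_net, depth_net: networks that return mid-level feature maps of the same shape.
    """
    with torch.no_grad():
        target = depth_net(depth)            # supervision extracted from the depth modality
    pred = rgb_net(rgb)                      # the RGB network learns to mimic it
    return F.mse_loss(pred, target)
```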
no code implementations • 21 May 2016 • Xingchao Peng, Judy Hoffman, Stella X. Yu, Kate Saenko
We address the difficult problem of distinguishing fine-grained object categories in low resolution images.
no code implementations • 23 Nov 2015 • Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, Trevor Darrell
We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains.
no code implementations • 16 Oct 2015 • Oscar Beijbom, Judy Hoffman, Evan Yao, Trevor Darrell, Alberto Rodriguez-Ramirez, Manuel Gonzalez-Rivero, Ove Hoegh-Guldberg
Quantification is the task of estimating the class-distribution of a data-set.
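A compact example of the task: the simplest quantifier classifies every example and counts the predictions; the adjusted variant below corrects those counts using the classifier's estimated true/false positive rates. This is a standard textbook method shown for a binary problem, not this paper's specific model.

```python
def adjusted_count(preds, tpr, fpr):
    """Estimate the positive-class prevalence from binary predictions.

    preds: iterable of 0/1 predictions on the unlabeled set.
    tpr, fpr: true/false positive rates measured on held-out labeled data.
    Classify-and-count gives mean(preds); the adjustment inverts the
    classifier's confusion rates: p = (cc - fpr) / (tpr - fpr).
    """
    preds = list(preds)
    cc = sum(preds) / len(preds)                 # naive classify-and-count
    p = (cc - fpr) / (tpr - fpr)                 # adjusted classify-and-count
    return min(max(p, 0.0), 1.0)                 # clip to a valid proportion

# Example: 60% predicted positive, tpr=0.9, fpr=0.2 -> estimated prevalence ~0.571
print(adjusted_count([1] * 60 + [0] * 40, tpr=0.9, fpr=0.2))
```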
no code implementations • ICCV 2015 • Damian Mrowca, Marcus Rohrbach, Judy Hoffman, Ronghang Hu, Kate Saenko, Trevor Darrell
Our approach proves to be especially useful in large scale settings with thousands of classes, where spatial and semantic interactions are very frequent and only weakly supervised detectors can be built due to a lack of bounding box annotations.
1 code implementation • ICCV 2015 • Eric Tzeng, Judy Hoffman, Trevor Darrell, Kate Saenko
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias.
1 code implementation • CVPR 2016 • Saurabh Gupta, Judy Hoffman, Jitendra Malik
In this work we propose a technique that transfers supervision between images from different modalities.
7 code implementations • 10 Dec 2014 • Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, Trevor Darrell
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.
Ranked #6 on Domain Adaptation on Office-Caltech
no code implementations • CVPR 2015 • Judy Hoffman, Deepak Pathak, Trevor Darrell, Kate Saenko
We develop methods for detector learning which exploit joint training over both weak and strong labels and which transfer learned perceptual representations from strongly-labeled auxiliary tasks.
1 code implementation • NeurIPS 2014 • Judy Hoffman, Sergio Guadarrama, Eric Tzeng, Ronghang Hu, Jeff Donahue, Ross Girshick, Trevor Darrell, Kate Saenko
A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories.
no code implementations • CVPR 2014 • Judy Hoffman, Trevor Darrell, Kate Saenko
The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them.
no code implementations • 21 Dec 2013 • Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, Trevor Darrell
In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be?
8 code implementations • 6 Oct 2013 • Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.
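In modern terms, the recipe this entry evaluates looks like the sketch below: take activations from a network pretrained on a large supervised task and train a simple classifier on them for a new task. The backbone and layer choice here are illustrative (DeCAF itself used an AlexNet-style network), and `X_train`/`y_train` are placeholders.

```python
import torch
import torchvision
from sklearn.linear_model import LogisticRegression

# Illustrative backbone; any ImageNet-pretrained classifier works for the sketch.
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()          # expose penultimate-layer activations
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: (N, 3, 224, 224) tensor, already normalized for ImageNet."""
    return backbone(images).numpy()

# Re-purpose the frozen features for a novel task with a linear classifier.
# clf = LogisticRegression(max_iter=1000).fit(extract_features(X_train), y_train)
```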
no code implementations • 20 Aug 2013 • Erik Rodner, Judy Hoffman, Jeff Donahue, Trevor Darrell, Kate Saenko
Images seen during test time are often not from the same distribution as images used for learning.
no code implementations • CVPR 2013 • Jeff Donahue, Judy Hoffman, Erik Rodner, Kate Saenko, Trevor Darrell
Most successful object classification and detection methods rely on classifiers trained on large labeled datasets.
no code implementations • 15 Jan 2013 • Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers.