no code implementations • NeurIPS 2008 • Kate Saenko, Trevor Darrell
Polysemy is a problem for methods that exploit image search engines to build object category models.
no code implementations • NeurIPS 2009 • Mario Fritz, Gary Bradski, Sergey Karayev, Trevor Darrell, Michael J. Black
The appearance of a transparent patch is determined in part by the refraction of a background pattern through a transparent medium: the energy from the background usually dominates the patch appearance.
no code implementations • NeurIPS 2009 • Brian Kulis, Trevor Darrell
Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches.
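The entry above concerns *learned* hash functions; as a point of reference, a minimal unlearned baseline is locality-sensitive hashing with random hyperplanes and Hamming-distance ranking. The function names below are illustrative, not from the paper:

```python
import numpy as np

def hash_codes(X, W):
    """Binary codes via random hyperplanes (an LSH baseline; the paper
    learns the hash functions instead of drawing W at random)."""
    return (X @ W > 0).astype(np.uint8)

def hamming_search(codes, query_code):
    """Rank database items by Hamming distance to a query code."""
    return np.argsort(np.count_nonzero(codes != query_code, axis=1))

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 16))      # 20 database vectors
W = rng.standard_normal((16, 32))      # 32 random hyperplanes
codes = hash_codes(X, W)
q = hash_codes(X[3:4], W)[0]           # query with item 3 itself
order = hamming_search(codes, q)
```

Ranking by Hamming distance over short binary codes is what makes such retrieval fast; the learned variants aim to make the codes more accurate at the same length.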
no code implementations • NeurIPS 2009 • Kate Saenko, Trevor Darrell
When faced with the task of learning a visual model based only on the name of an object, a common approach is to find images on the web that are associated with the object name, and then train a visual classifier from the search result.
no code implementations • NeurIPS 2010 • Yangqing Jia, Mathieu Salzmann, Trevor Darrell
Recent approaches to multi-view learning have shown that factorizing the information into parts that are shared across all views and parts that are private to each view could effectively account for the dependencies and independencies between the different input modalities.
no code implementations • NeurIPS 2010 • Mario Fritz, Kate Saenko, Trevor Darrell
Metric constraints are known to be highly discriminative for many objects, but if training is limited to data captured from a particular 3-D sensor the quantity of training data may be severely limited.
no code implementations • NeurIPS 2011 • Yangqing Jia, Trevor Darrell
Many applications in computer vision measure the similarity between images or image patches based on some statistics such as oriented gradients.
no code implementations • NeurIPS 2012 • Oriol Vinyals, Yangqing Jia, Li Deng, Trevor Darrell
The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous, often more complicated, methods on several vision and speech benchmarks.
Ranked #216 on Image Classification on CIFAR-10
no code implementations • NeurIPS 2012 • Sergey Karayev, Tobias Baumgartner, Mario Fritz, Trevor Darrell
On the timeliness measure, our method obtains at least 11% better performance.
no code implementations • 15 Jan 2013 • Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, Kate Saenko
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers.
no code implementations • 15 Jan 2013 • Oriol Vinyals, Yangqing Jia, Trevor Darrell
Recently, the computer vision and machine learning communities have favored feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, the well-understood properties of linear classifiers, and their computational efficiency.
no code implementations • CVPR 2013 • Jeff Donahue, Judy Hoffman, Erik Rodner, Kate Saenko, Trevor Darrell
Most successful object classification and detection methods rely on classifiers trained on large labeled datasets.
no code implementations • 20 Aug 2013 • Erik Rodner, Judy Hoffman, Jeff Donahue, Trevor Darrell, Kate Saenko
Images seen during test time are often not from the same distribution as images used for learning.
8 code implementations • 6 Oct 2013 • Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.
29 code implementations • CVPR 2014 • Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik
We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset.
Ranked #27 on Object Detection on PASCAL VOC 2007 (using extra training data)
1 code implementation • 15 Nov 2013 • Sergey Karayev, Matthew Trentacoste, Helen Han, Aseem Agarwala, Trevor Darrell, Aaron Hertzmann, Holger Winnemoeller
The style of an image plays a significant role in how it is viewed, but style has received little attention in computer vision research.
1 code implementation • CVPR 2014 • Ning Zhang, Manohar Paluri, Marc'Aurelio Ranzato, Trevor Darrell, Lubomir Bourdev
We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion.
Ranked #7 on Facial Attribute Classification on LFWA
no code implementations • 27 Nov 2013 • Ayan Chakrabarti, Ying Xiong, Baochen Sun, Trevor Darrell, Daniel Scharstein, Todd Zickler, Kate Saenko
To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range.
no code implementations • ICCV 2013 • Ning Zhang, Ryan Farrell, Forrest Iandola, Trevor Darrell
Recognizing objects in fine-grained domains can be extremely challenging due to the subtle differences between subcategories.
Ranked #25 on Fine-Grained Image Classification on CUB-200-2011
no code implementations • NeurIPS 2013 • Yangqing Jia, Joshua T. Abbott, Joseph L. Austerweil, Tom Griffiths, Trevor Darrell
Learning a visual concept from a small number of positive examples is a significant challenge for machine learning algorithms.
no code implementations • 21 Dec 2013 • Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, Trevor Darrell
In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be?
no code implementations • 5 Mar 2014 • Hyun Oh Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid Harchaoui, Trevor Darrell
Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain.
Ranked #35 on Weakly Supervised Object Detection on PASCAL VOC 2007
2 code implementations • 7 Apr 2014 • Forrest Iandola, Matt Moskewicz, Sergey Karayev, Ross Girshick, Trevor Darrell, Kurt Keutzer
Convolutional Neural Networks (CNNs) can provide accurate object classification.
no code implementations • 28 May 2014 • Tim Althoff, Hyun Oh Song, Trevor Darrell
While low-level image features have proven to be effective representations for visual recognition tasks such as object recognition and scene classification, they are inadequate to capture complex semantic meaning required to solve high-level visual tasks such as multimedia event detection and recognition.
no code implementations • CVPR 2014 • Sergey Karayev, Mario Fritz, Trevor Darrell
On suitable datasets, we can incorporate a semantic back-off strategy that gives maximally specific predictions for a desired level of accuracy; this provides a new view on the time course of human visual perception.
no code implementations • CVPR 2014 • Judy Hoffman, Trevor Darrell, Kate Saenko
The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them.
no code implementations • CVPR 2014 • Jiashi Feng, Stefanie Jegelka, Shuicheng Yan, Trevor Darrell
We use sample relatedness information to improve the generalization of the learned dictionary.
2 code implementations • 20 Jun 2014 • Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell
The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
no code implementations • NeurIPS 2014 • Hyun Oh Song, Yong Jae Lee, Stefanie Jegelka, Trevor Darrell
The increasing prominence of weakly labeled data nurtures a growing demand for object detection methods that can cope with minimal supervision.
no code implementations • 15 Jul 2014 • Ning Zhang, Jeff Donahue, Ross Girshick, Trevor Darrell
Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts.
Ranked #63 on Fine-Grained Image Classification on CUB-200-2011
1 code implementation • NeurIPS 2014 • Judy Hoffman, Sergio Guadarrama, Eric Tzeng, Ronghang Hu, Jeff Donahue, Ross Girshick, Trevor Darrell, Kate Saenko
A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories.
1 code implementation • CVPR 2015 • Ross Girshick, Forrest Iandola, Trevor Darrell, Jitendra Malik
Deformable part models (DPMs) and convolutional neural networks (CNNs) are two widely used tools for visual recognition.
Ranked #28 on Object Detection on PASCAL VOC 2007
1 code implementation • 30 Oct 2014 • Tao Chen, Damian Borth, Trevor Darrell, Shih-Fu Chang
Nearly one million Flickr images tagged with these ANPs are downloaded to train classifiers for these concepts.
no code implementations • NeurIPS 2014 • Jonathan Long, Ning Zhang, Trevor Darrell
Convolutional neural nets (convnets) trained from massive labeled datasets have substantially improved the state-of-the-art in image classification and object detection.
Ranked #4 on Keypoint Detection on Pascal3D+
53 code implementations • CVPR 2015 • Jonathan Long, Evan Shelhamer, Trevor Darrell
Convolutional networks are powerful visual models that yield hierarchies of features.
Ranked #2 on Semantic Segmentation on SkyScapes-Lane
7 code implementations • CVPR 2015 • Jeff Donahue, Lisa Anne Hendricks, Marcus Rohrbach, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, Trevor Darrell
Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise.
Ranked #3 on Human Interaction Recognition on BIT
no code implementations • CVPR 2015 • Judy Hoffman, Deepak Pathak, Trevor Darrell, Kate Saenko
We develop methods for detector learning which exploit joint training over both weak and strong labels and which transfer learned perceptual representations from strongly-labeled auxiliary tasks.
7 code implementations • 10 Dec 2014 • Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, Trevor Darrell
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.
Ranked #6 on Domain Adaptation on Office-Caltech
1 code implementation • 22 Dec 2014 • Deepak Pathak, Evan Shelhamer, Jonathan Long, Trevor Darrell
We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network.
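A multiple-instance formulation of this kind treats each image as a bag of pixels and supervises only through image-level labels; a common realization trains each present class through its single highest-scoring pixel. The sketch below is a simplified illustration of that idea (names and the exact loss are mine, not necessarily the paper's):

```python
import numpy as np

def mil_seg_loss(scores, image_labels):
    """Image-level MIL loss for segmentation (a simplified sketch).

    scores: (C, H, W) per-pixel class scores from a fully convolutional net.
    image_labels: class indices known to be present in the image.
    Each present class is supervised through its most confident pixel
    (max over the bag), via softmax cross-entropy over classes.
    """
    C = scores.shape[0]
    flat = scores.reshape(C, -1)
    loss = 0.0
    for c in image_labels:
        p = int(np.argmax(flat[c]))           # most confident pixel for class c
        col = flat[:, p]
        logz = np.log(np.sum(np.exp(col - col.max()))) + col.max()
        loss += logz - col[c]                 # -log softmax probability of c
    return loss / max(len(image_labels), 1)
```

Because the max selects a single pixel per class, gradients flow only through that pixel, which is what lets image-level tags stand in for dense masks.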
no code implementations • 22 Dec 2014 • Chelsea Finn, Lisa Anne Hendricks, Trevor Darrell
Recently, nested dropout was proposed as a method for ordering representation units in autoencoders by their information content, without diminishing reconstruction cost.
no code implementations • 2 Apr 2015 • Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control.
4 code implementations • 3 May 2015 • Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko
Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip.
1 code implementation • ICCV 2015 • Deepak Pathak, Philipp Krähenbühl, Trevor Darrell
We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e., the predicted label distribution) of a CNN.
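To make "linear constraints on the output space" concrete, one simple surrogate penalizes violations of linear inequalities on the expected label counts. This is only a sketch of the idea; the paper itself optimizes a KL projection of the network output onto the constraint set rather than a squared-hinge penalty:

```python
import numpy as np

def constraint_violation_loss(probs, A, b):
    """Squared-hinge penalty for violating linear constraints A @ q >= b,
    where q is the vector of expected per-class pixel counts.

    probs: (N, C) per-pixel class probabilities.
    A: (K, C) constraint matrix; b: (K,) lower bounds.
    """
    q = probs.sum(axis=0)                  # expected count of each class
    slack = b - A @ q                      # positive where a constraint is violated
    return float(np.sum(np.maximum(slack, 0.0) ** 2))
```

For example, the constraint "at least 3 pixels of class 0" is the row A = [1, 0], b = 3; a prediction already satisfying it incurs zero loss.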
1 code implementation • 21 Sep 2015 • Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel
Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models.
1 code implementation • ICCV 2015 • Eric Tzeng, Judy Hoffman, Trevor Darrell, Kate Saenko
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias.
no code implementations • ICCV 2015 • Damian Mrowca, Marcus Rohrbach, Judy Hoffman, Ronghang Hu, Kate Saenko, Trevor Darrell
Our approach proves to be especially useful in large scale settings with thousands of classes, where spatial and semantic interactions are very frequent and only weakly supervised detectors can be built due to a lack of bounding box annotations.
no code implementations • 16 Oct 2015 • Oscar Beijbom, Judy Hoffman, Evan Yao, Trevor Darrell, Alberto Rodriguez-Ramirez, Manuel Gonzalez-Rivero, Ove Hoegh-Guldberg
Quantification is the task of estimating the class-distribution of a data-set.
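A standard quantification baseline, adjusted classify-and-count, corrects the classifier's raw positive rate using its known error rates; it illustrates the task even though the paper evaluates its own set of methods. A binary-case sketch:

```python
import numpy as np

def adjusted_classify_and_count(pred_pos_rate, tpr, fpr):
    """Adjusted classify-and-count (ACC) prevalence estimate, binary case.

    pred_pos_rate: fraction of test items the classifier labels positive.
    tpr, fpr: true/false positive rates measured on validation data.
    Inverts  pred_pos_rate = tpr * p + fpr * (1 - p)  for the prevalence p.
    """
    p = (pred_pos_rate - fpr) / (tpr - fpr)
    return float(np.clip(p, 0.0, 1.0))
```

With tpr = 0.8 and fpr = 0.2, an observed positive rate of 0.35 implies a true prevalence of 0.25, even though naive counting would report 0.35.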
1 code implementation • CVPR 2016 • Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein
Visual question answering is fundamentally compositional in nature: a question like "where is the dog?"
Ranked #6 on Visual Question Answering (VQA) on VQA v1 test-std
3 code implementations • 12 Nov 2015 • Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, Bernt Schiele
We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly.
Ranked #12 on Phrase Grounding on Flickr30k Entities Test
1 code implementation • CVPR 2016 • Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, Trevor Darrell
In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object.
Ranked #12 on Referring Expression Comprehension on Talk2Car
1 code implementation • CVPR 2016 • Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell
Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet.
no code implementations • 19 Nov 2015 • Yang Gao, Lisa Anne Hendricks, Katherine J. Kuchenbecker, Trevor Darrell
Robots which interact with the physical world will benefit from a fine-grained tactile understanding of objects and surfaces.
6 code implementations • CVPR 2016 • Yang Gao, Oscar Beijbom, Ning Zhang, Trevor Darrell
Bilinear models have been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine-grained recognition and face recognition.
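Full bilinear pooling, which compact variants approximate, sum-pools the outer product of two local feature maps over spatial locations and then normalizes the result. A minimal sketch (the signed square root and L2 normalization follow common practice for bilinear CNN descriptors):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Sum-pooled outer product of two sets of local features.

    fa: (L, Da) and fb: (L, Db) features at L spatial locations.
    Returns a (Da * Db,) descriptor with signed-sqrt and L2 normalization.
    """
    assert fa.shape[0] == fb.shape[0]
    b = fa.T @ fb                          # (Da, Db): sum over locations
    v = b.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))    # signed square root
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
d = bilinear_pool(rng.standard_normal((49, 8)), rng.standard_normal((49, 8)))
```

The quadratic Da * Db dimensionality of this descriptor is exactly what motivates the compact approximation studied in the paper.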
2 code implementations • 21 Nov 2015 • Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell
Convolutional Neural Networks spread through computer vision like wildfire, impacting almost all visual tasks imaginable.
no code implementations • 21 Nov 2015 • Takuya Narihira, Damian Borth, Stella X. Yu, Karl Ni, Trevor Darrell
We consider the visual sentiment task of mapping an image to an adjective noun pair (ANP) such as "cute baby".
no code implementations • 22 Nov 2015 • Ning Zhang, Evan Shelhamer, Yang Gao, Trevor Darrell
Pose variation and subtle differences in appearance are key challenges to fine-grained classification.
no code implementations • 22 Nov 2015 • Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, Trevor Darrell
Precisely-labeled data sets with sufficient amount of samples are very important for training deep convolutional neural networks (CNNs).
no code implementations • 23 Nov 2015 • Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, Trevor Darrell
We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains.
no code implementations • 23 Nov 2015 • Deepak Pathak, Philipp Krähenbühl, Stella X. Yu, Trevor Darrell
We present a regression framework which models the output distribution of neural networks.
no code implementations • ICCV 2015 • Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko
Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip.
no code implementations • ICCV 2015 • Jiashi Feng, Trevor Darrell
In this work, we develop a novel method for automatically learning aspects of the structure of a deep model, in order to improve its performance, especially when labeled training data are scarce.
3 code implementations • NAACL 2016 • Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein
We describe a question answering model that applies to both images and structured knowledge bases.
4 code implementations • 20 Mar 2016 • Ronghang Hu, Marcus Rohrbach, Trevor Darrell
To produce pixelwise segmentation for the language expression, we propose an end-to-end trainable recurrent and convolutional network model that jointly learns to process visual and linguistic information.
Ranked #16 on Referring Expression Segmentation on J-HMDB
no code implementations • 28 Mar 2016 • Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself.
11 code implementations • CVPR 2016 • Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros
In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s).
40 code implementations • CVPR 2015 • Evan Shelhamer, Jonathan Long, Trevor Darrell
Convolutional networks are powerful visual models that yield hierarchies of features.
Ranked #2 on Semantic Segmentation on NYU Depth v2 (Mean Accuracy metric)
10 code implementations • 31 May 2016 • Jeff Donahue, Philipp Krähenbühl, Trevor Darrell
The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution.
no code implementations • CVPR 2016 • Judy Hoffman, Saurabh Gupta, Trevor Darrell
Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart.
10 code implementations • EMNLP 2016 • Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach
Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations.
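Compact bilinear pooling avoids forming the full outer product of the two modalities by Count Sketching each vector and convolving the sketches via FFT. A self-contained sketch of that mechanism (variable names are mine):

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project x to d dims: entry i is added to bucket h[i] with sign s[i]."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x, y, d=64, seed=0):
    """Approximate the outer product x ⊗ y with a d-dim vector:
    Count Sketch both inputs, then convolve the sketches via FFT."""
    rng = np.random.default_rng(seed)
    hx = rng.integers(0, d, x.size); sx = rng.choice([-1.0, 1.0], x.size)
    hy = rng.integers(0, d, y.size); sy = rng.choice([-1.0, 1.0], y.size)
    fx = np.fft.fft(count_sketch(x, hx, sx, d))
    fy = np.fft.fft(count_sketch(y, hy, sy, d))
    return np.fft.ifft(fx * fy).real

rng = np.random.default_rng(1)
x, y = rng.standard_normal(32), rng.standard_normal(32)
z = compact_bilinear(x, y, d=64)
```

Convolution of the two sketches equals the sketch of the outer product, so a 32 × 32 = 1024-dim interaction is represented in 64 dimensions.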
1 code implementation • CVPR 2017 • Subhashini Venugopalan, Lisa Anne Hendricks, Marcus Rohrbach, Raymond Mooney, Trevor Darrell, Kate Saenko
We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets.
1 code implementation • 11 Aug 2016 • Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell
Recent years have seen tremendous progress in still-image segmentation; however the naïve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video.
no code implementations • 30 Aug 2016 • Ronghang Hu, Marcus Rohrbach, Subhashini Venugopalan, Trevor Darrell
Image segmentation from referring expressions is a joint vision and language modeling task, where the input is an image and a textual expression describing a particular region in the image; and the goal is to localize and segment the specific image region based on the given expression.
no code implementations • 22 Sep 2016 • Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, Sergey Levine
Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations.
2 code implementations • CVPR 2017 • Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, Kate Saenko
In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying entities and relationships mentioned in the input expression and grounding them all in the scene.
Ranked #1 on Visual Question Answering (VQA) on Visual7W
2 code implementations • CVPR 2017 • Huazhe Xu, Yang Gao, Fisher Yu, Trevor Darrell
Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or a simulation environment.
3 code implementations • 8 Dec 2016 • Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell
In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems.
Ranked #2 on Image-to-Image Translation on SYNTHIA Fall-to-Winter
no code implementations • 14 Dec 2016 • Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
In contrast, humans can justify their decisions with natural language and point to the evidence in the visual world which led to their decisions.
1 code implementation • CVPR 2017 • Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan
Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature.
no code implementations • 21 Dec 2016 • Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell
Reinforcement learning optimizes policies for expected cumulative reward.
no code implementations • 15 Feb 2017 • Andrew Zhai, Dmitry Kislyuk, Yushi Jing, Michael Feng, Eric Tzeng, Jeff Donahue, Yue Li Du, Trevor Darrell
Over the past three years Pinterest has experimented with several visual search and recommendation services, including Related Pins (2014), Similar Looks (2015), Flashlight (2016) and Lens (2017).
20 code implementations • CVPR 2017 • Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
1 code implementation • CVPR 2017 • Samaneh Azadi, Jiashi Feng, Trevor Darrell
To predict a set of diverse and informative proposals with enriched representations, this paper introduces a differentiable Determinantal Point Process (DPP) layer that is able to augment the object detection architectures.
1 code implementation • ICCV 2017 • Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Kate Saenko
Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems.
Ranked #43 on Visual Question Answering (VQA) on VQA v2 test-dev
2 code implementations • ICCV 2017 • Marcel Simon, Yang Gao, Trevor Darrell, Joachim Denzler, Erik Rodner
In this paper, we generalize average and bilinear pooling to "alpha-pooling", allowing for learning the pooling strategy during training.
14 code implementations • ICML 2017 • Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether.
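The curiosity signal in this line of work is, at its core, the prediction error of a learned forward model: transitions the agent cannot yet predict yield high intrinsic reward, and the reward fades as the model improves. A deliberately tiny sketch with a linear forward model (the paper uses learned feature encodings and deep networks, omitted here):

```python
import numpy as np

class ForwardModelCuriosity:
    """Intrinsic reward as forward-model prediction error (a toy sketch)."""

    def __init__(self, state_dim, action_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Linear forward model: predicts next state from [state, action].
        self.W = rng.standard_normal((state_dim, state_dim + action_dim)) * 0.1
        self.lr = lr

    def reward_and_update(self, s, a, s_next):
        x = np.concatenate([s, a])
        pred = self.W @ x
        err = pred - s_next
        r_int = 0.5 * float(err @ err)        # intrinsic reward = prediction error
        self.W -= self.lr * np.outer(err, x)  # one SGD step on the model
        return r_int

m = ForwardModelCuriosity(state_dim=2, action_dim=1)
r0 = m.reward_and_update(np.array([1.0, 0.0]), np.array([1.0]), np.array([0.0, 1.0]))
```

Repeating the same transition drives its reward toward zero, so the agent is pushed toward transitions it has not yet mastered.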
7 code implementations • CVPR 2018 • Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell
We augment standard architectures with deeper aggregation to better fuse information across layers.
2 code implementations • ICCV 2017 • Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell
A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment.
1 code implementation • 14 Aug 2017 • Coline Devin, Pieter Abbeel, Trevor Darrell, Sergey Levine
We devise an object-level attentional mechanism that can be used to determine relevant objects from a few trajectories or demonstrations, and then immediately incorporate those objects into a learned policy.
no code implementations • CVPR 2018 • Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell, Dawn Song
Our work sheds new light on understanding adversarial attacks on vision systems which have a language component and shows that attention, bounding box localization, and compositional internal structures are vulnerable to adversarial attacks.
no code implementations • 16 Oct 2017 • Sayna Ebrahimi, Anna Rohrbach, Trevor Darrell
We develop a method for policy architecture search and adaptation via gradient-free optimization which can learn to perform autonomous driving tasks.
3 code implementations • ICML 2018 • Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell
Domain adaptation is critical for success in new, unseen environments.
no code implementations • 17 Nov 2017 • Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
We also introduce a multimodal methodology for generating visual and textual explanations simultaneously.
no code implementations • 17 Nov 2017 • Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
Existing models which generate textual explanations enforce task relevance through a discriminative term loss function, but such mechanisms only weakly constrain mentioned object parts to actually be present in the image.
2 code implementations • ECCV 2018 • Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, Joseph E. Gonzalez
While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient.
3 code implementations • CVPR 2018 • Ronghang Hu, Piotr Dollár, Kaiming He, Trevor Darrell, Ross Girshick
Most methods for object instance segmentation require all training examples to be labeled with segmentation masks.
6 code implementations • NeurIPS 2017 • Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman
Our proposed method encourages bijective consistency between the latent encoding and output modes.
6 code implementations • CVPR 2018 • Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell
In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface.
no code implementations • ICLR 2018 • Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, Thomas Griffiths
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task.
no code implementations • ICLR 2018 • Yang Gao, Huazhe Xu, Ji Lin, Fisher Yu, Sergey Levine, Trevor Darrell
We propose a unified reinforcement learning algorithm, Normalized Actor-Critic (NAC), that effectively normalizes the Q-function, reducing the Q-values of actions unseen in the demonstration data.
1 code implementation • CVPR 2018 • Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths.
2 code implementations • ECCV 2018 • Kaylee Burns, Lisa Anne Hendricks, Kate Saenko, Trevor Darrell, Anna Rohrbach
We introduce a new Equalizer model that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present.
1 code implementation • ICLR 2018 • Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, Trevor Darrell
In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference.
4 code implementations • CVPR 2020 • Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, Trevor Darrell
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving.
Ranked #5 on Multiple Object Tracking on BDD100K test
1 code implementation • 25 May 2018 • Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alexei A. Efros, Sergey Levine
Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors.
no code implementations • 5 Jun 2018 • Xin Wang, Fisher Yu, Lisa Dunlap, Yi-An Ma, Ruth Wang, Azalia Mirhoseini, Trevor Darrell, Joseph E. Gonzalez
Larger networks generally have greater representational power at the cost of increased computational complexity.
1 code implementation • NeurIPS 2018 • Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction.
1 code implementation • 21 Jun 2018 • Deepak Pathak, Yide Shentu, Dian Chen, Pulkit Agrawal, Trevor Darrell, Sergey Levine, Jitendra Malik
The agent uses its current segmentation model to infer pixels that constitute objects and refines the segmentation model by interacting with these pixels.
no code implementations • 26 Jun 2018 • Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
We call such textual explanations counterfactual explanations, and propose an intuitive method to generate counterfactual explanations by inspecting which evidence in an input is missing, but might contribute to a different classification decision if present in the image.
no code implementations • 2 Jul 2018 • Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, Anna Rohrbach
Most machine learning methods are known to capture and exploit biases of the training data.
2 code implementations • ICLR 2019 • Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, Tengyu Ma
Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL.
1 code implementation • 19 Jul 2018 • Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell
Generative Adversarial Networks (GANs) can produce images of remarkable complexity and realism, but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.
1 code implementation • ECCV 2018 • Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko
In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction.
Ranked #14 on Referring Expression Comprehension on Talk2Car
no code implementations • ECCV 2018 • Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
Our model improves the textual explanation quality of fine-grained classification decisions on the CUB dataset by mentioning phrases that are grounded in the image.
2 code implementations • ECCV 2018 • Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, Zeynep Akata
Finally, we explore a version of our model that generates rationalizations, and compare with introspective explanations on the same video segments.
4 code implementations • ICLR 2019 • Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros
However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent.
Ranked #14 on Atari Games on Atari 2600 Montezuma's Revenge
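One common instantiation of such intrinsic rewards is prediction error: a model is trained to predict features of observations, and its error serves as a curiosity bonus that is large for novel inputs and decays with familiarity. The sketch below — linear "networks", fixed random target features, and the learning rate are all illustrative assumptions, not the paper's architecture — shows the bonus shrinking on a repeatedly visited observation.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, feat_dim = 8, 4

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(feat_dim, obs_dim))

# Trainable predictor network, same shape, initialized at zero.
W_pred = np.zeros((feat_dim, obs_dim))

def intrinsic_reward(obs):
    """Exploration bonus = squared error between predictor and frozen target."""
    err = W_target @ obs - W_pred @ obs
    return float(err @ err)

def train_predictor(obs, lr=0.05):
    """One gradient step pulling the predictor toward the target's features."""
    global W_pred
    err = W_pred @ obs - W_target @ obs          # shape (feat_dim,)
    W_pred -= lr * np.outer(err, obs)            # gradient of 0.5 * ||err||^2

obs = rng.normal(size=obs_dim)
r_first = intrinsic_reward(obs)                  # novel observation: large bonus
for _ in range(200):
    train_predictor(obs)
r_later = intrinsic_reward(obs)                  # familiar observation: bonus decays
```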
1 code implementation • EMNLP 2018 • Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell
To benchmark whether our model, and other recent video localization models, can effectively reason about temporal language, we collect the novel TEMPOral reasoning in video and language (TEMPO) dataset.
1 code implementation • EMNLP 2018 • Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, Kate Saenko
Despite continuously improving performance, contemporary image captioning models are prone to "hallucinating" objects that are not actually in a scene.
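A per-sentence hallucination score in the spirit of the paper's CHAIR metric can be sketched as follows; the helper name is hypothetical, and a real implementation would first map caption words onto a canonical object vocabulary (e.g., via synonym lists) before comparing.

```python
def hallucination_rate(caption_objects, image_objects):
    """Fraction of objects mentioned in a caption that are absent from the
    image's ground-truth object set (a simplified, per-sentence CHAIR-style
    score; inputs are assumed to be pre-canonicalized object labels)."""
    mentioned = set(caption_objects)
    if not mentioned:
        return 0.0
    hallucinated = mentioned - set(image_objects)
    return len(hallucinated) / len(mentioned)

# A caption mentioning a "dog" for an image containing only a cat and a couch:
rate = hallucination_rate({"cat", "dog", "couch"}, {"cat", "couch"})
```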
no code implementations • 27 Sep 2018 • Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach
Sequential learning of tasks arriving in a continuous stream is a complex problem and becomes more challenging when the model has a fixed capacity.
2 code implementations • ICLR 2019 • Zhuang Liu, Ming-Jie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.
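The pruning pipeline these findings examine can be sketched with simple magnitude pruning, which both selects the "important" weights and defines the pruned architecture via a binary mask; the function name, shapes, and sparsity level below are illustrative.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude.
    Returns the pruned weights and the binary mask; the mask is what the
    paper calls the pruned architecture, separable from the weight values."""
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    # k-th smallest magnitude is the pruning threshold.
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, sparsity=0.5)   # half the weights are zeroed
```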
1 code implementation • ICLR 2019 • Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, Augustus Odena
We propose a rejection sampling scheme using the discriminator of a GAN to approximately correct errors in the GAN generator distribution.
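The idea can be sketched as follows, under the assumption of a near-optimal discriminator whose logit estimates the log density ratio log p_data(x) - log p_g(x); the function names are illustrative, and the paper's full scheme also adds a shift term so acceptance rates do not vanish.

```python
import math
import random

random.seed(0)

def accept_probability(logit, max_logit):
    """Normalizing by the largest ratio seen yields a valid acceptance
    probability in (0, 1]; the sample with the top logit is always kept."""
    return math.exp(logit - max_logit)

def rejection_sample(samples_with_logits):
    """Keep each generated sample with probability proportional to its
    estimated density ratio (a simplified version of the paper's scheme)."""
    max_logit = max(l for _, l in samples_with_logits)
    kept = []
    for x, logit in samples_with_logits:
        if random.random() < accept_probability(logit, max_logit):
            kept.append(x)
    return kept

# Hypothetical generated samples paired with discriminator logits:
batch = [("img_a", 2.0), ("img_b", -1.0), ("img_c", 1.9)]
kept = rejection_sample(batch)   # "img_a" has acceptance probability 1.0
```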
no code implementations • 8 Nov 2018 • Dennis Lee, Haoran Tang, Jeffrey O. Zhang, Huazhe Xu, Trevor Darrell, Pieter Abbeel
We present a novel modular architecture for StarCraft II AI.
no code implementations • 13 Nov 2018 • Dequan Wang, Coline Devin, Qi-Zhi Cai, Fisher Yu, Trevor Darrell
While learning visuomotor skills in an end-to-end manner is appealing, deep neural networks are often uninterpretable and fail in surprising ways.
1 code implementation • ICCV 2019 • Hou-Ning Hu, Qi-Zhi Cai, Dequan Wang, Ji Lin, Min Sun, Philipp Krähenbühl, Trevor Darrell, Fisher Yu
The framework can not only associate detections of vehicles in motion over time, but also estimate their complete 3D bounding box information from a sequence of 2D images captured on a moving platform.
Ranked #12 on Multiple Object Tracking on KITTI Tracking test
1 code implementation • ICCV 2019 • Hang Gao, Huazhe Xu, Qi-Zhi Cai, Ruth Wang, Fisher Yu, Trevor Darrell
A dynamic scene has two types of elements: those that move fluidly and can be predicted from previous frames, and those which are disoccluded (exposed) and cannot be extrapolated.
no code implementations • 3 Dec 2018 • Eric Tzeng, Kaylee Burns, Kate Saenko, Trevor Darrell
Without dense labels, as is the case when only detection labels are available in the source, transformations are learned using CycleGAN alignment.
1 code implementation • 4 Dec 2018 • Roei Herzig, Elad Levi, Huijuan Xu, Hang Gao, Eli Brosh, Xiaolong Wang, Amir Globerson, Trevor Darrell
Events defined by the interaction of objects in a scene are often of critical importance; yet important events may have insufficient labeled examples to train a conventional deep model to generalize to future object appearance.
4 code implementations • ICCV 2019 • Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, Trevor Darrell
The feature learner extracts meta features that are generalizable to detect novel object classes, using training data from base classes with sufficient samples.
Ranked #21 on Few-Shot Object Detection on MS-COCO (30-shot)
2 code implementations • 5 Dec 2018 • Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata
Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space.
Ranked #2 on Generalized Few-Shot Learning on AwA2
1 code implementation • CVPR 2019 • Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video.
2 code implementations • CVPR 2019 • Zhichao Yin, Trevor Darrell, Fisher Yu
Explicit representations of the global match distributions of pixel-wise correspondences between pairs of images are desirable for uncertainty estimation and downstream applications.
Ranked #13 on Optical Flow Estimation on KITTI 2015 (train)
no code implementations • 25 Dec 2018 • Huijuan Xu, Bingyi Kang, Ximeng Sun, Jiashi Feng, Kate Saenko, Trevor Darrell
In this paper, we present a conceptually simple and general yet novel framework for few-shot temporal activity detection which detects the start and end time of the few-shot input activities in an untrimmed video.
1 code implementation • ICCV 2019 • Dong Huk Park, Trevor Darrell, Anna Rohrbach
We present a novel Dual Dynamic Attention Model (DUDA) to perform robust Change Captioning.
1 code implementation • NeurIPS 2019 • Deepak Pathak, Chris Lu, Trevor Darrell, Phillip Isola, Alexei A. Efros
We evaluate the performance of these dynamic and modular agents in simulated environments.
no code implementations • ICLR Workshop LLD 2019 • Evan Shelhamer, Dequan Wang, Trevor Darrell
The visual world is vast and varied, but its variations divide into structured and unstructured factors.
no code implementations • ICLR Workshop LLD 2019 • Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata
While following the same direction, we also take artificial feature generation one step further and propose a model in which a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, in order to generate latent features for training a softmax classifier.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell
Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.
6 code implementations • ICCV 2019 • Samarth Sinha, Sayna Ebrahimi, Trevor Darrell
Unlike conventional active learning algorithms, our approach is task agnostic, i.e., it does not depend on the performance of the task for which we are trying to acquire labeled data.
1 code implementation • CVPR 2019 • Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, Joseph E. Gonzalez
We show that TAFE-Net is highly effective in generalizing to new tasks or concepts and evaluate the TAFE-Net on a range of benchmarks in zero-shot and few-shot learning.
Ranked #1 on Few-Shot Image Classification on aPY - 0-Shot
3 code implementations • ICCV 2019 • Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, Kate Saenko
Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision.
no code implementations • 25 Apr 2019 • Evan Shelhamer, Dequan Wang, Trevor Darrell
Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.
no code implementations • ICLR 2019 • Kate Rakelly*, Evan Shelhamer*, Trevor Darrell, Alexei A. Efros, Sergey Levine
To explore generalization, we analyze guidance as a bridge between different levels of supervision to segment classes as the union of instances.
1 code implementation • 1 May 2019 • Eli Brosh, Matan Friedmann, Ilan Kadar, Lev Yitzhak Lavy, Elad Levi, Shmuel Rippa, Yair Lempert, Bruno Fernandez-Ruiz, Roei Herzig, Trevor Darrell
We propose a hybrid coarse-to-fine approach that leverages visual and GPS location cues.
1 code implementation • ICCV 2019 • Ronghang Hu, Anna Rohrbach, Trevor Darrell, Kate Saenko
E.g., conditioning on the "on" relationship to the plate, the object "mug" gathers messages from the object "plate" to update its representation to "mug on the plate", which can be easily consumed by a simple classifier for answer prediction.
Ranked #3 on Referring Expression Comprehension on CLEVR-Ref+
no code implementations • 16 May 2019 • Dequan Wang, Coline Devin, Qi-Zhi Cai, Philipp Krähenbühl, Trevor Darrell
Convolutions on monocular dash cam videos capture spatial invariances in the image plane but do not explicitly reason about distances and depth.
no code implementations • ACL 2019 • Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, Kate Saenko
The actual grounding can connect language to the environment through multiple modalities, e.g., "stop at the door" might ground into visual objects, while "turn right" might rely only on the geometric structure of a route.
2 code implementations • ICLR 2020 • Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach
Continual learning aims to learn new tasks without forgetting previously learned ones.
1 code implementation • 11 Jun 2019 • Xin Wang, Fisher Yu, Trevor Darrell, Joseph E. Gonzalez
In this work, we propose a task-aware feature generation (TFG) framework for compositional learning, which generates features of novel visual concepts by transferring knowledge from previously seen concepts.
no code implementations • 8 Aug 2019 • Dequan Wang, Evan Shelhamer, Bruno Olshausen, Trevor Darrell
Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field.
no code implementations • 25 Sep 2019 • Parsa Mahmoudieh, Trevor Darrell, Deepak Pathak
Instead of direct manual supervision, which is tedious and prone to bias, our goal in this work is to extract reusable skills from a collection of human demonstrations collected directly for several end-tasks.
no code implementations • 25 Sep 2019 • Evan Shelhamer, Dequan Wang, Trevor Darrell
Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.
no code implementations • 25 Sep 2019 • Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Trevor Darrell
Learning diverse and natural behaviors is one of the longstanding goals for creating intelligent characters in the animated world.
no code implementations • 25 Sep 2019 • Huazhe Xu, Boyuan Chen, Yang Gao, Trevor Darrell
In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary-quality interactions, as well as the corresponding sparse rewards, and then plan on unseen tasks in a zero-shot setting.
3 code implementations • 26 Sep 2019 • Yu Sun, Eric Tzeng, Trevor Darrell, Alexei A. Efros
This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data.
1 code implementation • 17 Oct 2019 • Huazhe Xu, Boyuan Chen, Yang Gao, Trevor Darrell
The agent is first presented with previous experiences in the training environment, along with a task description in the form of trajectory-level sparse rewards.
2 code implementations • 21 Oct 2019 • Zhuang Liu, Xuanlin Li, Bingyi Kang, Trevor Darrell
In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.
1 code implementation • 21 Oct 2019 • Zhuang Liu, Hung-Ju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell
Interestingly, the processing model's ability to enhance recognition quality can transfer when evaluated on models of different architectures, recognized categories, tasks and training datasets.
no code implementations • 30 Oct 2019 • Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine
We show that CPVs can be learned within a one-shot imitation learning framework without any additional supervision or information about task hierarchy, and enable a demonstration-conditioned policy to generalize to tasks that sequence twice as many skills as the tasks seen during training.
1 code implementation • CVPR 2020 • Ronghang Hu, Amanpreet Singh, Trevor Darrell, Marcus Rohrbach
Recent work has explored the TextVQA task that requires reading and understanding text in images to answer a question.
2 code implementations • 26 Nov 2019 • Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic
For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts.
Ranked #1 on Image Generation on Cityscapes-5K 256x512
1 code implementation • NeurIPS 2019 • Coline Devin, Daniel Geng, Pieter Abbeel, Trevor Darrell, Sergey Levine
We show that CPVs can be learned within a one-shot imitation learning framework without any additional supervision or information about task hierarchy, and enable a demonstration-conditioned policy to generalize to tasks that sequence twice as many skills as the tasks seen during training.
2 code implementations • ECCV 2020 • Roei Herzig, Amir Bar, Huijuan Xu, Gal Chechik, Trevor Darrell, Amir Globerson
Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images.
Ranked #3 on Layout-to-Image Generation on Visual Genome 256x256
1 code implementation • CVPR 2020 • Joanna Materzynska, Tete Xiao, Roei Herzig, Huijuan Xu, Xiaolong Wang, Trevor Darrell
Human action is naturally compositional: humans can easily recognize and perform actions with objects that are different from those used in training demonstrations.
1 code implementation • 23 Dec 2019 • Richard Li, Allan Jabri, Trevor Darrell, Pulkit Agrawal
Learning robotic manipulation tasks using reinforcement learning with sparse rewards is currently impractical due to the outrageous data requirements.
10 code implementations • ICCV 2021 • Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang
The boundary between these two lines of work has been underexplored, and the effectiveness of meta-learning in few-shot learning remains unclear.
3 code implementations • 11 Mar 2020 • Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing
This drawback hinders the model from learning subtle variance and fine-grained information.
4 code implementations • ICML 2020 • Xin Wang, Thomas E. Huang, Trevor Darrell, Joseph E. Gonzalez, Fisher Yu
Such a simple approach outperforms the meta-learning methods by roughly 2-20 points on current benchmarks and sometimes even doubles the accuracy of the prior methods.
Ranked #17 on Few-Shot Object Detection on MS-COCO (30-shot)
1 code implementation • ECCV 2020 • Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach
We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks.
no code implementations • 31 Mar 2020 • Huijuan Xu, Ximeng Sun, Eric Tzeng, Abir Das, Kate Saenko, Trevor Darrell
In this paper, we present a conceptually simple and general yet novel framework for few-shot temporal activity detection based on proposal regression which detects the start and end time of the activities in untrimmed videos.
1 code implementation • ECCV 2020 • Zhekun Luo, Devin Guillory, Baifeng Shi, Wei Ke, Fang Wan, Trevor Darrell, Huijuan Xu
Weakly-supervised action localization requires training a model to localize the action segments in the video given only video-level action labels.
Ranked #9 on Weakly Supervised Action Localization on THUMOS’14
no code implementations • 1 Apr 2020 • Huijuan Xu, Lizhi Yang, Stan Sclaroff, Kate Saenko, Trevor Darrell
Spatio-temporal action detection in videos requires localizing the action both spatially and temporally in the form of an "action tube".
no code implementations • 14 Apr 2020 • Viktoriia Sharmanska, Lisa Anne Hendricks, Trevor Darrell, Novi Quadrianto
Computer vision algorithms, e.g., for face recognition, favour groups of individuals that are better represented in the training data.
no code implementations • 21 Apr 2020 • Xu Shen, Ivo Batkovic, Vijay Govindarajan, Paolo Falcone, Trevor Darrell, Francesco Borrelli
We investigate the problem of predicting driver behavior in parking lots, an environment which is less structured than typical road networks and features complex, interactive maneuvers in a compact space.
no code implementations • ICCV 2021 • Elad Levi, Tete Xiao, Xiaolong Wang, Trevor Darrell
We theoretically prove and empirically show that under reasonable noise assumptions, margin-based losses tend to project all samples of a class with various modes onto a single point in the embedding space, resulting in a class collapse that usually renders the space ill-sorted for classification or retrieval.
3 code implementations • CVPR 2021 • Jiangmiao Pang, Linlu Qiu, Xia Li, Haofeng Chen, Qi Li, Trevor Darrell, Fisher Yu
Compared to methods with similar detectors, it boosts MOTA by almost 10 points and significantly decreases the number of ID switches on the BDD100K and Waymo datasets.
Ranked #1 on One-Shot Object Detection on PASCAL VOC 2012 val
2 code implementations • ICLR 2021 • Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
A model must adapt itself to generalize to new and different data during testing.
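One concrete form of such self-adaptation is to minimize the entropy of the model's own test-time predictions — the authors' fully test-time adaptation setting does this while updating only normalization parameters. The sketch below applies the idea directly to a single logit vector for illustration; the numbers and learning rate are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

def entropy_grad_logits(z):
    """Gradient of H(softmax(z)) w.r.t. the logits, derived from
    dp_i/dz_j = p_i (delta_ij - p_j): dH/dz_j = p_j (-log p_j - H)."""
    p = softmax(z)
    h = entropy(p)
    return p * (-np.log(p + 1e-12) - h)

z = np.array([1.0, 0.8, 0.2])          # uncertain prediction on a shifted input
h0 = entropy(softmax(z))
for _ in range(50):
    z -= 0.5 * entropy_grad_logits(z)   # gradient descent on prediction entropy
h1 = entropy(softmax(z))
# Entropy drops as the prediction sharpens toward the initially favored class.
```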
1 code implementation • 27 Jun 2020 • Amir Bar, Roei Herzig, Xiaolong Wang, Anna Rohrbach, Gal Chechik, Trevor Darrell, Amir Globerson
Our generative model for this task (AG2Vid) disentangles motion and appearance features, and by incorporating a scheduling mechanism for actions facilitates a timely and coordinated video generation.
1 code implementation • ICML 2020 • Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Trevor Darrell
In video prediction tasks, one major challenge is to capture the multi-modal nature of future contents and dynamics.
no code implementations • ECCV 2020 • Medhini Narasimhan, Erik Wijmans, Xinlei Chen, Trevor Darrell, Dhruv Batra, Devi Parikh, Amanpreet Singh
We also demonstrate that reducing the task of room navigation to point navigation improves the performance further.
1 code implementation • CVPR 2021 • Evonne Ng, Shiry Ginosar, Trevor Darrell, Hanbyul Joo
We demonstrate the efficacy of our method on hand gesture synthesis from body motion input, and as a strong body prior for single-view image-based 3D hand pose estimation.
no code implementations • ICLR 2021 • Tete Xiao, Xiaolong Wang, Alexei A. Efros, Trevor Darrell
Recent self-supervised contrastive methods have been able to produce impressive transferable visual representations by learning to be invariant to different data augmentations.
1 code implementation • ECCV 2020 • Jae Sung Park, Trevor Darrell, Anna Rohrbach
This auxiliary task allows us to propose a two-stage approach to Identity-Aware Video Description.
no code implementations • ECCV 2020 • Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, Trevor Darrell
Generating diverse and natural human motion is one of the long-standing goals for creating intelligent characters in the animated world.
no code implementations • 7 Sep 2020 • Sicheng Zhao, Yezhen Wang, Bo Li, Bichen Wu, Yang Gao, Pengfei Xu, Trevor Darrell, Kurt Keutzer
They require prior knowledge of real-world statistics and ignore the pixel-level dropout noise gap and the spatial feature gap between different domains.
1 code implementation • CVPR 2021 • Colorado J Reed, Sean Metzger, Aravind Srinivas, Trevor Darrell, Kurt Keutzer
A common practice in unsupervised representation learning is to use labeled data to evaluate the quality of the learned representations.
no code implementations • 28 Sep 2020 • Elad Levi, Tete Xiao, Xiaolong Wang, Trevor Darrell
We theoretically prove and empirically show that under reasonable noise assumptions, prevalent embedding losses in metric learning, e.g., triplet loss, tend to project all samples of a class with various modes onto a single point in the embedding space, resulting in a class collapse that usually renders the space ill-sorted for classification or retrieval.
1 code implementation • ICLR 2021 • Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell
The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting.
no code implementations • CVPR 2021 • Bo Li, Yezhen Wang, Shanghang Zhang, Dongsheng Li, Trevor Darrell, Kurt Keutzer, Han Zhao
First, we provide a finite sample bound for both classification and regression problems under Semi-DA.
no code implementations • NeurIPS 2020 • Baifeng Shi, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
By adjusting the auxiliary task weights to minimize the divergence between the surrogate prior and the true prior of the main task, we obtain a more accurate prior estimation, achieving the goal of minimizing the required amount of training data for the main task and avoiding a costly grid search.
no code implementations • NAACL 2021 • Rodolfo Corona, Daniel Fried, Coline Devin, Dan Klein, Trevor Darrell
In our approach, subgoal modules each carry out natural language instructions for a specific subgoal type.
no code implementations • NeurIPS 2020 • Chuan Wen, Jierui Lin, Trevor Darrell, Dinesh Jayaraman, Yang Gao
Imitation learning trains policies to map from input observations to the actions that an expert would choose.
no code implementations • ICCV 2021 • Baifeng Shi, Qi Dai, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
We extensively benchmark against the baselines for SSAD and OSAD on our created data splits in THUMOS14 and ActivityNet 1.2, and demonstrate the effectiveness of the proposed UFA and IB methods.
no code implementations • 18 Dec 2020 • Sayna Ebrahimi, William Gan, Dian Chen, Giscard Biamby, Kamyar Salahi, Michael Laielli, Shizhan Zhu, Trevor Darrell
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
no code implementations • 1 Jan 2021 • Medhini Narasimhan, Shiry Ginosar, Andrew Owens, Alexei A Efros, Trevor Darrell
By randomly traversing edges with high transition probabilities, we generate diverse temporally smooth videos with novel sequences and transitions.
1 code implementation • ICLR 2021 • Zhuang Liu, Xuanlin Li, Bingyi Kang, Trevor Darrell
In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.
no code implementations • 1 Jan 2021 • Dong Huk Park, Trevor Darrell
To this end, reconstruction-based learning is often used in which the normality of an observation is expressed in how well it can be reconstructed.
1 code implementation • ICLR 2021 • Xuanlin Li, Brandon Trabucco, Dong Huk Park, Michael Luo, Sheng Shen, Trevor Darrell, Yang Gao
One strategy to recover this information is to decode both the content and location of tokens.
no code implementations • 1 Jan 2021 • Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic
Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes.
1 code implementation • 14 Jan 2021 • Jinkun Cao, Xin Wang, Trevor Darrell, Fisher Yu
To decide the action at each step, we seek the action sequence that can lead to safe future states based on the prediction module outputs by repeatedly sampling likely action sequences.
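The sampling-based planning step described here can be sketched as random shooting: draw candidate action sequences, roll each out through the learned prediction module, and keep one whose predicted states are all safe. All function and parameter names below are illustrative, not the paper's API; the toy world stands in for the prediction module.

```python
import random

random.seed(1)

def plan(state, predict_next, is_safe, action_space, horizon=5, n_candidates=64):
    """Random-shooting planner: sample action sequences, simulate each with
    the prediction model, and return the first sequence whose predicted
    states are all safe (a simplified sketch of safety-filtered planning)."""
    for _ in range(n_candidates):
        seq, s = [], state
        ok = True
        for _ in range(horizon):
            a = random.choice(action_space)
            s = predict_next(s, a)
            if not is_safe(s):
                ok = False
                break
            seq.append(a)
        if ok:
            return seq
    return None  # in practice, fall back to a conservative default action

# Toy 1-D world: the predicted position must stay within [-3, 3].
first_action_plan = plan(
    state=0,
    predict_next=lambda s, a: s + a,
    is_safe=lambda s: -3 <= s <= 3,
    action_space=[-1, 0, 1],
)
```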
1 code implementation • 12 Mar 2021 • Hou-Ning Hu, Yung-Hsu Yang, Tobias Fischer, Trevor Darrell, Fisher Yu, Min Sun
Experiments on our proposed simulation data and real-world benchmarks, including KITTI, nuScenes, and Waymo datasets, show that our tracking framework offers robust object association and tracking on urban-driving scenarios.
Ranked #7 on Multiple Object Tracking on KITTI Tracking test
1 code implementation • 23 Mar 2021 • Colorado J. Reed, Xiangyu Yue, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, Trevor Darrell
Through experimentation on 16 diverse vision datasets, we show HPT converges up to 80x faster, improves accuracy across tasks, and improves the robustness of the self-supervised pretraining process to changes in the image augmentation policy or amount of pretraining data.