no code implementations • 20 Feb 2023 • Francesco Croce, Sylvestre-Alvise Rebuffi, Evan Shelhamer, Sven Gowal
Adversarial training is widely used to make classifiers robust to a specific threat or adversary, such as perturbations bounded in a given $\ell_p$-norm.
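To make the threat model concrete, here is a minimal projected-gradient attack restricted to an $\ell_\infty$ ball — a toy illustration of the perturbation sets this line of work studies, not the paper's implementation; the `loss_grad` oracle and all constants are assumptions for the example.

```python
def linf_pgd(x0, loss_grad, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient ascent within an l-infinity ball of radius eps.

    x0: clean input (list of floats); loss_grad: callable returning the
    gradient of the attacked loss at x (assumed supplied by the victim model).
    """
    x = list(x0)
    for _ in range(steps):
        g = loss_grad(x)
        # signed gradient step (the l-infinity steepest-ascent direction)
        x = [xi + alpha * (1.0 if gi >= 0 else -1.0) for xi, gi in zip(x, g)]
        # project back onto the eps-ball around the clean input
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x
```

With a loss that grows in every coordinate, the attack saturates the ball: each coordinate ends at the eps boundary.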
no code implementations • 30 Sep 2022 • Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelović, João Carreira, Olivier Hénaff
Self-supervised methods have achieved remarkable success in transfer learning, often matching or exceeding the accuracy of supervised pre-training.
1 code implementation • 7 Jul 2022 • Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, Dequan Wang
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
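As a rough intuition for this input-level adaptation, here is a toy 1-D analogue: noise the test input, then denoise it by following the score of the source distribution. The closed-form Gaussian score below is a stand-in for the learned diffusion model the paper uses on images; every constant here is an assumption for illustration.

```python
import math, random

def project_to_source(x, mu=0.0, sigma=1.0, t=0.6, steps=20):
    """Noise a target-domain input, then denoise it toward the source domain.

    The analytic Gaussian score (mu - x) / sigma**2 stands in for a
    diffusion model trained on source data; real inputs are images.
    """
    # forward diffusion: mix the input with Gaussian noise
    xt = math.sqrt(1.0 - t) * x + math.sqrt(t) * random.gauss(0.0, 1.0)
    # reverse process: repeatedly step along the source score
    step = t / steps
    for _ in range(steps):
        xt += step * (mu - xt) / sigma ** 2
    return xt
```

Inputs shifted away from the source (here, to 3.0 against a source centered at 0) are pulled back toward it, which is the point of updating the data rather than the model.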
no code implementations • 16 Mar 2022 • Olivier J. Hénaff, Skanda Koppula, Evan Shelhamer, Daniel Zoran, Andrew Jaegle, Andrew Zisserman, João Carreira, Relja Arandjelović
The promise of self-supervised learning (SSL) is to leverage large amounts of unlabeled data to solve complex tasks.
1 code implementation • 28 Feb 2022 • Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, Taylan Cemgil
Adaptive defenses, which optimize at test time, promise to improve adversarial robustness.
2 code implementations • 22 Feb 2022 • João Carreira, Skanda Koppula, Daniel Zoran, Adrià Recasens, Catalin Ionescu, Olivier Hénaff, Evan Shelhamer, Relja Arandjelović, Matt Botvinick, Oriol Vinyals, Karen Simonyan, Andrew Zisserman, Andrew Jaegle
This, however, hinders them from scaling up to the input sizes required to process raw high-resolution images or video.
1 code implementation • 2 Sep 2021 • Dequan Wang, Shaoteng Liu, Sayna Ebrahimi, Evan Shelhamer, Trevor Darrell
Domain adaptation seeks to mitigate the shift between training on the \emph{source} domain and testing on the \emph{target} domain.
7 code implementations • ICLR 2022 • Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira
A central goal of machine learning is the development of systems that can solve many problems in as many data domains as possible.
Ranked #1 on Optical Flow Estimation on KITTI 2015 (Average End-Point Error metric)
2 code implementations • NeurIPS 2021 • Dequan Wang, An Ju, Evan Shelhamer, David Wagner, Trevor Darrell
Adversarial attacks optimize against models to defeat defenses.
1 code implementation • ICLR 2022 • Zhuang Liu, Zhiqiu Xu, Hung-Ju Wang, Trevor Darrell, Evan Shelhamer
A cascade of "exits" is attached to the model to make multiple predictions.
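A minimal sketch of such an exit cascade, assuming toy stages and a max-softmax confidence rule (the stage functions, threshold, and names here are illustrative, not the paper's architecture):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def anytime_predict(x, stages, threshold=0.9):
    """Run a cascade of (feature_fn, exit_fn) stages and return the first
    prediction whose confidence clears the threshold (an early exit)."""
    probs = None
    for i, (feature_fn, exit_fn) in enumerate(stages):
        x = feature_fn(x)                  # next chunk of the backbone
        probs = softmax(exit_fn(x))        # this stage's attached "exit"
        if max(probs) >= threshold:
            return i, probs                # confident enough: stop here
    return len(stages) - 1, probs          # fall through to the final exit
```

Easy inputs exit at the first stage; harder ones run the whole cascade and take the final exit.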
no code implementations • 12 Jul 2020 • Mark Hamilton, Evan Shelhamer, William T. Freeman
Joint optimization of these "likelihood parameters" with model parameters can adaptively tune the scales and shapes of losses in addition to the strength of regularization.
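A minimal instance of a learned likelihood parameter, assuming a Gaussian likelihood with a single scale; the paper optimizes such parameters jointly with the model, whereas this sketch fits only the scale, and the learning rate and names are assumptions.

```python
import math

def fit_scale(residuals, lr=0.1, steps=200):
    """Fit the scale of a Gaussian likelihood by gradient descent.

    The negative log-likelihood sum(0.5*(r/s)**2 + log s) trades the data
    term against log s, so the loss scale tunes itself to the errors
    instead of being hand-picked.
    """
    log_sigma = 0.0
    n = len(residuals)
    for _ in range(steps):
        s = math.exp(log_sigma)
        # d(NLL)/d(log sigma) = n - sum(r**2) / sigma**2
        grad = n - sum((r / s) ** 2 for r in residuals)
        log_sigma -= lr * grad / n
    return math.exp(log_sigma)
```

The fitted scale converges to the root-mean-square residual, the stationary point of the likelihood.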
2 code implementations • ICLR 2021 • Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
A model must adapt itself to generalize to new and different data during testing.
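One way to adapt at test time is to minimize the entropy of the model's own predictions on the test batch. Below is a one-parameter caricature of that idea — a single sharpness parameter tuned by a numerical gradient — whereas the actual method adapts a network's normalization parameters by backpropagation; all names and constants here are assumptions.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def batch_entropy(logits_batch, scale):
    """Mean prediction entropy of a batch when logits are scaled by `scale`."""
    total = 0.0
    for logits in logits_batch:
        p = softmax([scale * z for z in logits])
        total += -sum(q * math.log(q) for q in p if q > 0)
    return total / len(logits_batch)

def adapt(logits_batch, scale=1.0, lr=0.5, steps=25, eps=1e-4):
    """Minimize test-batch entropy over the sharpness parameter by
    gradient descent, using a central-difference numerical gradient."""
    for _ in range(steps):
        g = (batch_entropy(logits_batch, scale + eps)
             - batch_entropy(logits_batch, scale - eps)) / (2 * eps)
        scale -= lr * g
    return scale
```

After adaptation the predictions are sharper (lower entropy) on the same unlabeled test batch, with no labels involved.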
1 code implementation • 21 Oct 2019 • Zhuang Liu, Hung-Ju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell
Interestingly, the processing model's ability to enhance recognition quality transfers when evaluated on models with different architectures, recognized categories, tasks, and training datasets.
no code implementations • 25 Sep 2019 • Evan Shelhamer, Dequan Wang, Trevor Darrell
Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.
no code implementations • 8 Aug 2019 • Dequan Wang, Evan Shelhamer, Bruno Olshausen, Trevor Darrell
Given the variety of the visual world, there is no single true scale for recognition: objects may appear at drastically different sizes across the visual field.
no code implementations • 25 Apr 2019 • Evan Shelhamer, Dequan Wang, Trevor Darrell
Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.
no code implementations • ICLR Workshop LLD 2019 • Evan Shelhamer, Dequan Wang, Trevor Darrell
The visual world is vast and varied, but its variations divide into structured and unstructured factors.
no code implementations • 12 Feb 2019 • Kelsey R. Allen, Evan Shelhamer, Hanul Shin, Joshua B. Tenenbaum
We propose infinite mixture prototypes to adaptively represent both simple and complex data distributions for few-shot learning.
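The growing-prototype idea can be sketched with a DP-means-style rule on 1-D points — a toy stand-in for the embedded support examples the method actually clusters; the distance threshold `lam` and all names are assumptions for illustration.

```python
def infinite_prototypes(points, lam):
    """Cluster 1-D points with a prototype set that grows as needed.

    A point farther than `lam` from every existing prototype spawns a new
    one, so simple data yields few prototypes and complex data yields many.
    """
    prototypes = [points[0]]
    assignments = []
    for p in points:
        dist, j = min((abs(p - c), j) for j, c in enumerate(prototypes))
        if dist > lam:
            prototypes.append(p)           # the data demands a new prototype
            j = len(prototypes) - 1
        assignments.append(j)
    # one refinement pass: move each prototype to its cluster mean
    for j in range(len(prototypes)):
        members = [p for p, a in zip(points, assignments) if a == j]
        if members:
            prototypes[j] = sum(members) / len(members)
    return prototypes, assignments
```

Two well-separated groups of points yield exactly two prototypes without fixing the number of clusters in advance.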
no code implementations • 27 Sep 2018 • Kelsey R Allen, Hanul Shin, Evan Shelhamer, Josh B. Tenenbaum
On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification.
1 code implementation • 25 May 2018 • Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alexei A. Efros, Sergey Levine
Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors.
1 code implementation • ICLR 2018 • Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, Trevor Darrell
In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference.
6 code implementations • CVPR 2018 • Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell
We augment standard architectures with deeper aggregation to better fuse information across layers.
no code implementations • 21 Dec 2016 • Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell
Reinforcement learning optimizes policies for expected cumulative reward.
1 code implementation • 11 Aug 2016 • Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell
Recent years have seen tremendous progress in still-image segmentation; however, the naïve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video.
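One simple way to exploit that temporal continuity is to cache expensive deep features at keyframes and reuse them while the scene changes little. The sketch below uses scalar "frames" and a stand-in feature function — assumptions for illustration, not the paper's scheduling scheme.

```python
def segment_video(frames, deep_features, threshold):
    """Reuse cached deep features while each frame stays close to the last
    keyframe, recomputing only on sufficient change.

    Frames are scalars and `deep_features` is any expensive callable; both
    stand in for video frames and a segmentation network's deep layers.
    """
    outputs, recomputes = [], 0
    cache, keyframe = None, None
    for frame in frames:
        if keyframe is None or abs(frame - keyframe) > threshold:
            cache = deep_features(frame)   # expensive: run the deep layers
            keyframe = frame
            recomputes += 1
        outputs.append(cache)              # cheap: reuse the cached result
    return outputs, recomputes
```

On a sequence with one abrupt change, the deep layers run twice instead of once per frame, while every frame still gets an output.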
36 code implementations • CVPR 2015 • Evan Shelhamer, Jonathan Long, Trevor Darrell
Convolutional networks are powerful visual models that yield hierarchies of features.
Ranked #1 on Semantic Segmentation on NYU Depth v2 (Mean Accuracy metric)
no code implementations • 22 Nov 2015 • Ning Zhang, Evan Shelhamer, Yang Gao, Trevor Darrell
Pose variation and subtle differences in appearance are key challenges to fine-grained classification.
2 code implementations • 22 Dec 2014 • Deepak Pathak, Evan Shelhamer, Jonathan Long, Trevor Darrell
We propose a novel multiple instance learning (MIL) formulation for learning multi-class semantic segmentation with a fully convolutional network.
Multiple Instance Learning • Weakly-Supervised Semantic Segmentation
50 code implementations • CVPR 2015 • Jonathan Long, Evan Shelhamer, Trevor Darrell
Convolutional networks are powerful visual models that yield hierarchies of features.
Ranked #2 on Semantic Segmentation on SkyScapes-Lane
3 code implementations • 3 Oct 2014 • Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, Evan Shelhamer
To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads.
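The BLAS analogy rests on lowering convolution to dense matrix multiplication. Below is the textbook explicit form of that lowering (im2col followed by a GEMM-style product); the library itself avoids materializing the unrolled matrix in memory, and the helper names here are ours.

```python
def im2col(image, k):
    """Unroll every k x k patch of a 2-D image into a row, so that
    convolution becomes one dense matrix multiplication — the GEMM
    lowering that BLAS-style libraries are built around."""
    h, w = len(image), len(image[0])
    return [[image[i + di][j + dj] for di in range(k) for dj in range(k)]
            for i in range(h - k + 1) for j in range(w - k + 1)]

def conv2d(image, kernel):
    """Valid 2-D cross-correlation computed via the im2col lowering."""
    k = len(kernel)
    flat = [kernel[di][dj] for di in range(k) for dj in range(k)]
    return [sum(a * b for a, b in zip(row, flat)) for row in im2col(image, k)]
```

A 3x3 image of ones convolved with a 2x2 kernel of ones gives four outputs of 4, one per valid patch.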
2 code implementations • 20 Jun 2014 • Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell
The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.