Search Results for author: Evan Shelhamer

Found 31 papers, 19 papers with code

Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts

no code implementations · CVPR 2023 · Francesco Croce, Sylvestre-Alvise Rebuffi, Evan Shelhamer, Sven Gowal

Adversarial training is widely used to make classifiers robust to a specific threat or adversary, such as perturbations bounded in a given $\ell_p$-norm.
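The "soup" in the title refers to averaging the weights of multiple fine-tuned models. A minimal sketch of a uniform soup, with toy parameter dictionaries standing in for real checkpoints (the names and values here are made up for illustration):

```python
import numpy as np

# A uniform "model soup": average the parameters of several fine-tuned
# models that share an architecture. The two parameter dictionaries
# below are toy stand-ins for real checkpoints.
checkpoints = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.5])},
    {"w": np.array([3.0, 0.0]), "b": np.array([1.5])},
]

# Average each named parameter across checkpoints.
soup = {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
        for name in checkpoints[0]}
print(soup["w"], soup["b"])
```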

Back to the Source: Diffusion-Driven Adaptation To Test-Time Corruption

no code implementations · CVPR 2023 · Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, Dequan Wang

We update the target data instead, and project all test inputs toward the source domain with a generative diffusion model.

Test-time Adaptation

Back to the Source: Diffusion-Driven Test-Time Adaptation

1 code implementation · 7 Jul 2022 · Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, Dequan Wang

We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.

Test-time Adaptation
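As a toy analogue of updating the target data rather than the model: nudge a corrupted test input toward a stand-in "source distribution" by ascending its log-density. This is a crude, illustrative proxy for the paper's diffusion-driven projection, not its actual method:

```python
import numpy as np

# Toy stand-in for diffusion-driven input adaptation: the "source domain"
# is a 2-D standard Gaussian, and the projection is a few gradient steps
# on its log-density. Purely illustrative.
mu = np.array([0.0, 0.0])     # source distribution mean
x = np.array([3.0, -3.0])     # corrupted test input, far from the source
step = 0.3
for _ in range(10):
    score = -(x - mu)         # gradient of log N(mu, I) at x
    x = x + step * score      # move the input toward the source domain

print(np.linalg.norm(x - mu))  # distance to the source mean shrinks
```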

HiP: Hierarchical Perceiver

2 code implementations · 22 Feb 2022 · Joao Carreira, Skanda Koppula, Daniel Zoran, Adria Recasens, Catalin Ionescu, Olivier Henaff, Evan Shelhamer, Relja Arandjelovic, Matt Botvinick, Oriol Vinyals, Karen Simonyan, Andrew Zisserman, Andrew Jaegle

This, however, hinders them from scaling up to the input sizes required to process raw high-resolution images or video.

On-target Adaptation

1 code implementation · 2 Sep 2021 · Dequan Wang, Shaoteng Liu, Sayna Ebrahimi, Evan Shelhamer, Trevor Darrell

Domain adaptation seeks to mitigate the shift between training on the source domain and testing on the target domain.

Domain Adaptation

It Is Likely That Your Loss Should be a Likelihood

no code implementations · 12 Jul 2020 · Mark Hamilton, Evan Shelhamer, William T. Freeman

Joint optimization of these "likelihood parameters" with model parameters can adaptively tune the scales and shapes of losses in addition to the strength of regularization.

Outlier Detection
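A minimal illustration of jointly optimizing a likelihood parameter with model parameters: a learnable noise scale in a Gaussian negative log-likelihood for toy linear regression. The setup, names, and update rules here are illustrative, not the paper's:

```python
import numpy as np

# Toy regression where the loss is a Gaussian NLL with a learnable noise
# scale ("likelihood parameter") log_sigma, optimized jointly with the
# model weight w by gradient descent with analytic gradients.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)

w, log_sigma = 0.0, 0.0   # model parameter and likelihood parameter
lr = 0.05
for _ in range(500):
    resid = y - w * x
    sigma2 = np.exp(2 * log_sigma)
    # Per-point NLL: 0.5 * resid^2 / sigma^2 + log_sigma (+ const)
    grad_w = np.mean(-resid * x) / sigma2
    grad_log_sigma = np.mean(1.0 - resid ** 2 / sigma2)
    w -= lr * grad_w
    log_sigma -= lr * grad_log_sigma

print(w, np.exp(log_sigma))  # recovers slope ~2 and noise scale ~0.5
```

The learned scale adaptively reweights the squared-error term, which is the kind of tuning the abstract describes.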

Exploring Simple and Transferable Recognition-Aware Image Processing

1 code implementation · 21 Oct 2019 · Zhuang Liu, Hung-Ju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell

Interestingly, the processing model's ability to enhance recognition quality can transfer when evaluated on models of different architectures, recognized categories, tasks, and training datasets.

Image Retrieval · Recommendation Systems

Blurring Structure and Learning to Optimize and Adapt Receptive Fields

no code implementations · 25 Sep 2019 · Evan Shelhamer, Dequan Wang, Trevor Darrell

Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.

Semantic Segmentation

Dynamic Scale Inference by Entropy Minimization

no code implementations · 8 Aug 2019 · Dequan Wang, Evan Shelhamer, Bruno Olshausen, Trevor Darrell

Given the variety of the visual world there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field.

Semantic Segmentation
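The idea of inferring a quantity at test time by minimizing prediction entropy can be sketched with a single scalar scale on made-up logits. This is purely for intuition; the paper optimizes scale structure inside a segmentation network:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Entropy of the prediction as a function of a scalar scale s on fixed
# logits; minimizing it makes the prediction sharper. Logits and the
# parameterization are made up for illustration.
logits = np.array([1.0, 0.8, -0.5])

def entropy(s):
    p = softmax(s * logits)
    return -(p * np.log(p)).sum()

# Finite-difference gradient descent on entropy w.r.t. the scale.
s, lr, eps = 1.0, 0.5, 1e-5
for _ in range(100):
    g = (entropy(s + eps) - entropy(s - eps)) / (2 * eps)
    s -= lr * g

print(entropy(s) < entropy(1.0))  # sharper (lower-entropy) prediction
```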

Blurring the Line Between Structure and Learning to Optimize and Adapt Receptive Fields

no code implementations · 25 Apr 2019 · Evan Shelhamer, Dequan Wang, Trevor Darrell

Adapting receptive fields by dynamic Gaussian structure further improves results, equaling the accuracy of free-form deformation while improving efficiency.

Semantic Segmentation

Infinite Mixture Prototypes for Few-Shot Learning

no code implementations · 12 Feb 2019 · Kelsey R. Allen, Evan Shelhamer, Hanul Shin, Joshua B. Tenenbaum

We propose infinite mixture prototypes to adaptively represent both simple and complex data distributions for few-shot learning.

Clustering · Few-Shot Learning
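For intuition, standard prototype-based few-shot classification labels a query by its nearest class mean; infinite mixture prototypes generalize this by adapting the number of prototypes per class. A fixed one-prototype-per-class sketch on toy data (not the paper's adaptive method):

```python
import numpy as np

# Toy support set: a few labeled 2-D embeddings per class.
support = {0: np.array([[0.0, 0.0], [0.2, 0.1]]),
           1: np.array([[2.0, 2.0], [1.8, 2.2]])}

# One prototype per class: the mean of its support embeddings.
prototypes = {c: pts.mean(axis=0) for c, pts in support.items()}

# Classify a query by nearest prototype in Euclidean distance.
query = np.array([1.9, 2.1])
pred = min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
print(pred)  # the query lands on class 1's prototype
```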

Variadic Learning by Bayesian Nonparametric Deep Embedding

no code implementations · 27 Sep 2018 · Kelsey R. Allen, Hanul Shin, Evan Shelhamer, Josh B. Tenenbaum

On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification.

Clustering · Few-Shot Learning +1

Few-Shot Segmentation Propagation with Guided Networks

1 code implementation · 25 May 2018 · Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alexei A. Efros, Sergey Levine

Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors.

Interactive Segmentation · Segmentation +3

Deep Layer Aggregation

6 code implementations · CVPR 2018 · Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell

We augment standard architectures with deeper aggregation to better fuse information across layers.

Image Classification

Clockwork Convnets for Video Semantic Segmentation

1 code implementation · 11 Aug 2016 · Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell

Recent years have seen tremendous progress in still-image segmentation; however, the naïve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video.

Image Segmentation · Scheduling +4

Fine-grained pose prediction, normalization, and recognition

no code implementations · 22 Nov 2015 · Ning Zhang, Evan Shelhamer, Yang Gao, Trevor Darrell

Pose variation and subtle differences in appearance are key challenges to fine-grained classification.

General Classification · Pose Prediction

Fully Convolutional Multi-Class Multiple Instance Learning

1 code implementation · 22 Dec 2014 · Deepak Pathak, Evan Shelhamer, Jonathan Long, Trevor Darrell

We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network.

Multiple Instance Learning · Segmentation +1
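The MIL formulation can be sketched as pooling a per-class pixel score map down to an image-level prediction, so only the most confident pixel per class receives the image-level loss. The shapes and max-pooling choice here are illustrative:

```python
import numpy as np

# Per-pixel class scores from a (hypothetical) fully convolutional net:
# shape (classes, H, W). Each pixel is treated as an instance.
scores = np.random.default_rng(0).normal(size=(3, 8, 8))
image_label = 1  # only an image-level label is available

# MIL pooling: per-class spatial max over all pixel instances.
per_class_max = scores.reshape(3, -1).max(axis=1)

# Image-level softmax cross-entropy on the pooled scores.
p = np.exp(per_class_max) / np.exp(per_class_max).sum()
loss = -np.log(p[image_label])
print(loss)
```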

cuDNN: Efficient Primitives for Deep Learning

3 code implementations · 3 Oct 2014 · Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, Evan Shelhamer

To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads.
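One BLAS-friendly formulation such a library can exploit is lowering convolution to a matrix multiply via im2col. A minimal single-channel, stride-1, valid-padding sketch (in NumPy rather than CUDA, for readability):

```python
import numpy as np

def im2col(x, k):
    """Unroll every k-by-k window of x into a column."""
    H, W = x.shape
    cols = [x[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(cols, axis=1)  # shape (k*k, out_h*out_w)

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((3, 3)) / 9.0          # 3x3 mean filter

# Convolution as one matrix-vector product over the unrolled windows.
out = (w.ravel() @ im2col(x, 3)).reshape(2, 2)
print(out)
```

The unrolling trades memory for a single large GEMM, which is exactly the kind of routine a BLAS-like library optimizes.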

Caffe: Convolutional Architecture for Fast Feature Embedding

2 code implementations · 20 Jun 2014 · Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell

The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.

Clustering · Dimensionality Reduction +1
