Search Results for author: Dilip Krishnan

Found 27 papers, 14 papers with code

Object-Aware Cropping for Self-Supervised Learning

1 code implementation • 1 Dec 2021 • Shlok Mishra, Anshul Shah, Ankan Bansal, Abhyuday Jagannatha, Abhishek Sharma, David Jacobs, Dilip Krishnan

This assumption is mostly satisfied in datasets such as ImageNet where there is a large, centered object, which is highly likely to be present in random crops of the full image.

Data Augmentation, Object Detection, +1

Pyramid Adversarial Training Improves ViT Performance

no code implementations • 30 Nov 2021 • Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, Deqing Sun

In this work, we present Pyramid Adversarial Training, a simple and effective technique to improve ViT's overall performance.

Ranked #3 on Domain Generalization on ImageNet-C (using extra training data)

Adversarial Attack, Data Augmentation, +1

Contrastive Multiview Coding for Enzyme-Substrate Interaction Prediction

no code implementations • 18 Nov 2021 • Apurva Kalia, Dilip Krishnan, Soha Hassoun

Characterizing enzyme function is an important requirement for predicting enzyme-substrate interactions.

Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions

1 code implementation • 14 Aug 2021 • Andrea Burns, Aaron Sarna, Dilip Krishnan, Aaron Maschinot

Disentangled visual representations have largely been studied with generative models such as Variational AutoEncoders (VAEs).

Contrastive Learning, Disentanglement

Understanding Invariance via Feedforward Inversion of Discriminatively Trained Classifiers

no code implementations • 15 Mar 2021 • Piotr Teterwak, Chiyuan Zhang, Dilip Krishnan, Michael C. Mozer

We use our reconstruction model as a tool for exploring the nature of representations, including: the influence of model architecture and training objectives (specifically robust losses), the forms of invariance that networks achieve, representational differences between correctly and incorrectly classified images, and the effects of manipulating logits and images.

What Makes for Good Views for Contrastive Learning?

1 code implementation • NeurIPS 2020 • Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola

Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning.

Contrastive Learning, Data Augmentation, +7

Supervised Contrastive Learning

16 code implementations • NeurIPS 2020 • Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models.

Contrastive Learning, Data Augmentation, +3
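The objective behind this paper can be illustrated with a small single-view sketch of a supervised contrastive loss: each anchor is pulled toward every other batch sample sharing its label and pushed from the rest. This is a simplification of the paper's multi-view formulation, and the `temperature` value here is an arbitrary choice, not the paper's setting.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Single-view sketch of a supervised contrastive loss: for each anchor,
    maximize the average log-probability (under a softmax over similarities)
    of the other samples that share its label."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
    logits = z @ z.T / temperature                   # cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    logits = np.where(self_mask, -np.inf, logits)    # an anchor is never its own pair
    # log-softmax over all other samples for each anchor
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-probability of the positives per anchor, negated and averaged
    per_anchor = np.where(positives, log_prob, 0.0).sum(axis=1)
    return float(-(per_anchor / np.maximum(positives.sum(axis=1), 1)).mean())
```

On a toy batch, embeddings clustered by label score a lower loss than the same embeddings with labels scrambled, which is the behavior the loss is built to reward.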

Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?

3 code implementations • ECCV 2020 • Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola

The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test time tasks with limited data and low computational cost.

Few-Shot Image Classification, General Classification

Adversarial Robustness through Local Linearization

no code implementations • NeurIPS 2019 • Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli

Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with l-infinity adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack.

Adversarial Defense, Adversarial Robustness
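The regularizer referenced in the snippet penalizes how far the loss surface deviates from its first-order Taylor expansion around an input. A toy numpy illustration of that local-linearity gap, with a made-up quadratic function standing in for a network's loss:

```python
import numpy as np

def local_linearity_gap(loss_fn, grad_fn, x, delta):
    """Deviation of loss_fn from its first-order Taylor expansion at x along
    perturbation delta: |l(x + d) - l(x) - d . grad l(x)|. The paper's
    regularizer penalizes (a maximum over delta of) this quantity so the
    loss behaves locally linearly, which correlates with robustness."""
    return abs(loss_fn(x + delta) - loss_fn(x) - delta @ grad_fn(x))

# Toy stand-in for a network loss: l(x) = ||x||^2, gradient 2x.
loss = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
```

For this quadratic, the gap along a perturbation d is exactly ||d||^2, so shrinking the perturbation shrinks the gap quadratically.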

Contrastive Multiview Coding

6 code implementations • ECCV 2020 • Yonglong Tian, Dilip Krishnan, Phillip Isola

We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics.

Contrastive Learning, Self-Supervised Action Recognition, +1

A Closed-Form Learned Pooling for Deep Classification Networks

no code implementations • 10 Jun 2019 • Vighnesh Birodkar, Hossein Mobahi, Dilip Krishnan, Samy Bengio

This operator can learn a strict super-set of what can be learned by average pooling or convolutions.

Classification, Foveation, +2

Predicting the Generalization Gap in Deep Networks with Margin Distributions

1 code implementation • ICLR 2019 • Yiding Jiang, Dilip Krishnan, Hossein Mobahi, Samy Bengio

In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap.
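The raw material for such a measure is the per-sample classification margin: the true-class logit minus the largest competing logit. A minimal sketch of that quantity (the paper's actual predictor uses normalized margin distributions at several layers, which this omits):

```python
import numpy as np

def classification_margins(logits, labels):
    """Per-sample margin: true-class logit minus the largest other logit.
    Positive means correctly classified; statistics of (suitably normalized)
    margins across a dataset feed the generalization-gap predictor."""
    n = len(labels)
    true_logit = logits[np.arange(n), labels]
    rest = logits.copy()
    rest[np.arange(n), labels] = -np.inf   # mask the true class out
    return true_logit - rest.max(axis=1)
```

For logits `[[2, 0, 1], [0, 3, 1]]` with labels `[0, 1]`, the margins are `[1, 2]`: both samples are correctly classified, the second more confidently.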

Synthesizing Normalized Faces from Facial Identity Features

1 code implementation • CVPR 2017 • Forrester Cole, David Belanger, Dilip Krishnan, Aaron Sarna, Inbar Mosseri, William T. Freeman

We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph.

Domain Separation Networks

5 code implementations • NeurIPS 2016 • Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, Dumitru Erhan

However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain.

Domain Generalization, Unsupervised Domain Adaptation

Learning Ordinal Relationships for Mid-Level Vision

no code implementations • ICCV 2015 • Daniel Zoran, Phillip Isola, Dilip Krishnan, William T. Freeman

We demonstrate that this framework works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image.

Depth Estimation, Frame, +1

Learning visual groups from co-occurrences in space and time

2 code implementations • 21 Nov 2015 • Phillip Isola, Daniel Zoran, Dilip Krishnan, Edward H. Adelson

We propose a self-supervised framework that learns to group visual entities based on their rate of co-occurrence in space and time.
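One simple way to turn raw co-occurrence rates into a grouping signal is pointwise mutual information over observed pairs; this is an illustrative baseline in the spirit of the snippet, not the paper's learned model:

```python
import numpy as np

def pmi_affinity(pairs, n_entities):
    """Affinity matrix over n_entities from a list of co-occurring (i, j)
    pairs, scored by pointwise mutual information: entities that co-occur
    more often than chance get positive affinity and can then be grouped
    (e.g. by thresholding or spectral clustering)."""
    counts = np.zeros((n_entities, n_entities))
    for i, j in pairs:
        counts[i, j] += 1.0
        counts[j, i] += 1.0
    p_pair = counts / counts.sum()    # joint distribution over pair slots
    p_single = p_pair.sum(axis=1)     # marginal per entity
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_pair / np.outer(p_single, p_single))
    return np.where(np.isfinite(pmi), pmi, 0.0)  # unseen pairs get 0 affinity
```

Entities that co-occur far more often than their marginals predict come out with positive affinity; rare, incidental pairings come out negative.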


Reflection Removal Using Ghosting Cues

no code implementations • CVPR 2015 • YiChang Shih, Dilip Krishnan, Fredo Durand, William T. Freeman

For single-pane windows, ghosting cues arise from shifted reflections on the two surfaces of the glass pane.

Reflection Removal

Shape and Illumination from Shading using the Generic Viewpoint Assumption

no code implementations • NeurIPS 2014 • Daniel Zoran, Dilip Krishnan, José Bento, Bill Freeman

The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special.

Blind Deconvolution with Non-local Sparsity Reweighting

no code implementations • 16 Nov 2013 • Dilip Krishnan, Joan Bruna, Rob Fergus

Blind deconvolution has made significant progress in the past decade.

Fast Image Deconvolution using Hyper-Laplacian Priors

no code implementations • NeurIPS 2009 • Dilip Krishnan, Rob Fergus

In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors.

Deblurring, Denoising, +2
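The speed of this line of work comes from a half-quadratic splitting: one subproblem is a per-pixel closed-form update on the image gradients, and the other is a single division in Fourier space. The sketch below shows that alternating structure with made-up `lam`/`betas` weights and an L1 soft threshold in place of the paper's closed-form and lookup-table solutions for the hyper-Laplacian (alpha < 1) case:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small point-spread function to `shape`, circularly shift
    it so its center sits at (0, 0), and take its 2-D FFT."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        padded = np.roll(padded, -(size // 2), axis=axis)
    return np.fft.fft2(padded)

def deconv_half_quadratic(y, k, lam=2000.0, betas=(1.0, 4.0, 16.0, 64.0)):
    """Half-quadratic non-blind deconvolution sketch: alternate a per-pixel
    shrinkage of the image gradients (w-step) with one FFT division (x-step),
    increasing the splitting weight beta each round."""
    Fk = psf2otf(k, y.shape)
    Fdx = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal gradient filter
    Fdy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical gradient filter
    grad_energy = np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2
    Fy = np.fft.fft2(y)
    x = y.copy()
    for beta in betas:
        # w-step: closed-form per-pixel shrinkage of the gradients.
        gx = np.real(np.fft.ifft2(Fdx * np.fft.fft2(x)))
        gy = np.real(np.fft.ifft2(Fdy * np.fft.fft2(x)))
        wx = np.sign(gx) * np.maximum(np.abs(gx) - 1.0 / (2.0 * beta), 0.0)
        wy = np.sign(gy) * np.maximum(np.abs(gy) - 1.0 / (2.0 * beta), 0.0)
        # x-step: the quadratic subproblem is a single Fourier-domain division.
        num = (lam * np.conj(Fk) * Fy
               + beta * (np.conj(Fdx) * np.fft.fft2(wx)
                         + np.conj(Fdy) * np.fft.fft2(wy)))
        x = np.real(np.fft.ifft2(num / (lam * np.abs(Fk) ** 2 + beta * grad_energy)))
    return x
```

Because neither step involves an inner iterative solver, the cost per round is a handful of FFTs, which is where the claimed orders-of-magnitude speedup over iteratively reweighted schemes comes from.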
