Search Results for author: Dhruv Mahajan

Found 25 papers, 8 papers with code

Large-Scale Attribute-Object Compositions

no code implementations24 May 2021 Filip Radenovic, Animesh Sinha, Albert Gordo, Tamara Berg, Dhruv Mahajan

We study the problem of learning how to predict attribute-object compositions from images, and its generalization to unseen compositions missing from the training data.
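
The task above (scoring attribute-object pairs so that unseen combinations can still be ranked) is often handled with a compositional scoring head that combines separate attribute and object embeddings. The sketch below is a generic version of such a head, not the paper's architecture; the feature and embedding sizes are arbitrary stand-ins.

```python
# Minimal sketch of compositional (attribute, object) scoring with a generic
# embedding-based head; this is NOT the paper's exact architecture.
import torch
import torch.nn as nn

class CompositionScorer(nn.Module):
    def __init__(self, num_attrs, num_objs, img_dim=512, emb_dim=128):
        super().__init__()
        self.attr_emb = nn.Embedding(num_attrs, emb_dim)  # one vector per attribute
        self.obj_emb = nn.Embedding(num_objs, emb_dim)    # one vector per object
        self.img_proj = nn.Linear(img_dim, emb_dim)       # project image feature

    def forward(self, img_feat, attr_ids, obj_ids):
        # Combine attribute and object embeddings into a composition embedding,
        # then score it against the projected image feature with a dot product.
        comp = self.attr_emb(attr_ids) + self.obj_emb(obj_ids)  # (P, emb_dim)
        img = self.img_proj(img_feat)                           # (B, emb_dim)
        return img @ comp.t()                                   # (B, P) scores

scorer = CompositionScorer(num_attrs=115, num_objs=245)
feats = torch.randn(4, 512)              # pretend CNN features for 4 images
attrs = torch.tensor([0, 0, 1])          # pretend attribute ids, e.g. "red", "red", "sliced"
objs = torch.tensor([3, 7, 7])           # pretend object ids, e.g. "car", "apple", "apple"
print(scorer(feats, attrs, objs).shape)  # torch.Size([4, 3])
```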

Adaptive Methods for Real-World Domain Generalization

no code implementations CVPR 2021 Abhimanyu Dubey, Vignesh Ramanathan, Alex Pentland, Dhruv Mahajan

We show that the existing approaches either do not scale to this dataset or underperform compared to the simple baseline of training a model on the union of data from all training domains.

Domain Generalization
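
The baseline referenced in the abstract, training a single model on the union of data from all training domains, is plain empirical risk minimization over the pooled data. A minimal sketch with synthetic domains standing in for real ones:

```python
# Minimal sketch of the ERM-on-pooled-domains baseline: concatenate the data from
# all training domains and fit a single model. Domains here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
domains = []
for shift in (0.0, 1.5, 3.0):                      # three training "domains"
    X = rng.normal(loc=shift, scale=1.0, size=(200, 16))
    y = (X[:, 0] + rng.normal(scale=0.5, size=200) > shift).astype(int)
    domains.append((X, y))

# Union baseline: pool every domain and train one classifier on all of it.
X_all = np.vstack([X for X, _ in domains])
y_all = np.concatenate([y for _, y in domains])
model = LogisticRegression(max_iter=1000).fit(X_all, y_all)

# Evaluate on a held-out domain with a different shift (crude domain-gap check).
X_test = rng.normal(loc=4.5, scale=1.0, size=(200, 16))
y_test = (X_test[:, 0] > 4.5).astype(int)
print("accuracy on unseen domain:", model.score(X_test, y_test))
```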

Weakly Supervised Instance Segmentation for Videos with Temporal Mask Consistency

no code implementations CVPR 2021 Qing Liu, Vignesh Ramanathan, Dhruv Mahajan, Alan Yuille, Zhenheng Yang

However, existing approaches which rely only on image-level class labels predominantly suffer from errors due to (a) partial segmentation of objects and (b) missing object predictions.

Instance Segmentation Semantic Segmentation +1
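
The temporal mask consistency idea in the title can be illustrated with a toy loss that penalizes disagreement between soft masks predicted for consecutive frames. The sketch below compares masks directly, without the motion compensation a real system would use, so treat it as an illustration of the general idea rather than the paper's formulation.

```python
# Toy temporal-consistency loss between per-frame soft masks. For simplicity the
# masks are compared directly; a real system would warp them with optical flow.
import torch

def temporal_consistency_loss(masks):
    """masks: (T, H, W) soft foreground probabilities for T consecutive frames."""
    diffs = masks[1:] - masks[:-1]          # frame-to-frame changes
    return (diffs ** 2).mean()              # penalize large changes

masks = torch.rand(8, 64, 64, requires_grad=True)   # stand-in predictions
loss = temporal_consistency_loss(masks)
loss.backward()                                      # gradients flow back to the masks
print(float(loss))
```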

What leads to generalization of object proposals?

no code implementations13 Aug 2020 Rui Wang, Dhruv Mahajan, Vignesh Ramanathan

It is therefore valuable to train a good proposal model that generalizes to unseen classes.

Object Proposal Generation
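
A standard way to probe the generalization question posed above is to measure how often class-agnostic proposals cover ground-truth boxes of classes held out from training. A minimal IoU-based recall computation (the box format and threshold are conventional choices, not taken from the paper):

```python
# Minimal recall check for class-agnostic box proposals against ground-truth boxes
# of held-out classes. Boxes are [x1, y1, x2, y2]; the IoU threshold is an assumption.
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def recall(gt_boxes, proposals, thresh=0.5):
    hits = sum(iou(gt, proposals).max() >= thresh for gt in gt_boxes)
    return hits / len(gt_boxes)

gt = np.array([[10, 10, 50, 50], [60, 60, 90, 90]])            # unseen-class objects
props = np.array([[12, 8, 48, 52], [0, 0, 30, 30], [55, 58, 95, 92]])
print("recall@0.5:", recall(gt, props))
```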

Measuring Dataset Granularity

1 code implementation21 Dec 2019 Yin Cui, Zeqi Gu, Dhruv Mahajan, Laurens van der Maaten, Serge Belongie, Ser-Nam Lim

We also investigate the interplay between dataset granularity and a variety of factors, and find that fine-grained datasets are more difficult to learn from, more difficult to transfer to, more difficult to perform few-shot learning with, and more vulnerable to adversarial attacks.

Few-Shot Learning
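
The paper defines its own granularity measure, which is not reproduced here. Purely to illustrate what a granularity-style quantity can look like, the sketch below computes a common proxy, the ratio of within-class to between-class distances in a feature space, on synthetic data:

```python
# Illustrative granularity proxy only (NOT the paper's measure): ratio of average
# within-class distance to average between-class distance in a feature space.
# Higher values suggest classes that are harder to tell apart (finer-grained).
import numpy as np

def granularity_proxy(feats, labels):
    labels = np.asarray(labels)
    centroids = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
    within = np.mean([np.linalg.norm(f - centroids[c]) for f, c in zip(feats, labels)])
    cents = np.stack(list(centroids.values()))
    pair_dists = np.linalg.norm(cents[:, None, :] - cents[None, :, :], axis=-1)
    between = pair_dists[np.triu_indices(len(cents), k=1)].mean()
    return within / between

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 100)
centers = rng.normal(size=(3, 32))
feats_coarse = centers[labels] * 5 + rng.normal(size=(300, 32))   # well-separated classes
feats_fine = centers[labels] * 1 + rng.normal(size=(300, 32))     # overlapping classes
print("coarse-grained proxy:", granularity_proxy(feats_coarse, labels))
print("fine-grained proxy:  ", granularity_proxy(feats_fine, labels))
```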

From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality

2 code implementations CVPR 2020 Zhenqiang Ying, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, Alan Bovik

Blind or no-reference (NR) perceptual picture quality prediction is a difficult, unsolved problem of great consequence to the social and streaming media industries that impacts billions of viewers daily.

ClusterFit: Improving Generalization of Visual Representations

no code implementations CVPR 2020 Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, Dhruv Mahajan

Pre-training convolutional neural networks with weakly-supervised and self-supervised strategies is becoming increasingly popular for several computer vision tasks.

Action Classification Image Classification +1
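
ClusterFit is usually summarized as: extract features with a pre-trained network, cluster them with k-means, then re-train a fresh model to predict the cluster assignments. The sketch below captures that recipe with stand-ins (random features and a linear probe) in place of real networks:

```python
# Rough sketch of the cluster-and-refit idea: cluster features from a pre-trained
# model, then train a fresh classifier on the cluster assignments as pseudo-labels.
# The "features" and the linear probe below are stand-ins for real networks.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))       # pretend features from a pre-trained CNN

# Step 1: k-means over the pre-trained features produces pseudo-labels.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(features)
pseudo_labels = kmeans.labels_

# Step 2: re-train a fresh model (here a linear probe) to predict the clusters.
# In the full recipe this would be a new network trained from scratch on the images.
refit = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)
print("pseudo-label accuracy of the re-fit model:", refit.score(features, pseudo_labels))
```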

Self-Supervised Learning by Cross-Modal Audio-Video Clustering

1 code implementation NeurIPS 2020 Humam Alwassel, Dhruv Mahajan, Bruno Korbar, Lorenzo Torresani, Bernard Ghanem, Du Tran

To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.

Audio Classification Deep Clustering +4
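
The cross-modal clustering behind XDC is commonly described as clustering one modality's features and using the assignments as pseudo-labels for the encoder of the other modality. The sketch below shows one such round, audio clusters supervising a video head; the tiny encoder, feature sizes, and cluster count are assumptions:

```python
# Sketch of cross-modal pseudo-labeling: cluster audio features, then train a video
# head to predict those cluster IDs (the symmetric direction is analogous). The tiny
# MLP, feature sizes, and single round below are stand-ins, not XDC's exact setup.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

video_feats = torch.randn(512, 128)     # pretend per-clip video features
audio_feats = torch.randn(512, 40)      # pretend per-clip audio features
K = 8

# Cluster the AUDIO features to get pseudo-labels for the same clips.
audio_labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(audio_feats.numpy())
audio_labels = torch.as_tensor(audio_labels, dtype=torch.long)

# Train the VIDEO head to predict the audio-derived clusters.
video_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, K))
opt = torch.optim.SGD(video_head.parameters(), lr=0.1)
for _ in range(20):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(video_head(video_feats), audio_labels)
    loss.backward()
    opt.step()
print("final cross-modal loss:", float(loss))
```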

Billion-scale semi-supervised learning for image classification

2 code implementations2 May 2019 I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, Dhruv Mahajan

This paper presents a study of semi-supervised learning with large convolutional networks.

Ranked #85 on Image Classification on ImageNet (using extra training data)

Classification General Classification +2
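
The pipeline of this paper is widely summarized as teacher-student self-training: a teacher trained on labeled data ranks a large unlabeled pool, the most confident examples per class become pseudo-labeled training data for a student, and the student is then fine-tuned on the labeled set. A toy version with linear models and synthetic data (K and the models are stand-ins):

```python
# Toy teacher-student pseudo-labeling: a teacher trained on a small labeled set
# scores a large "unlabeled" pool, the top-K most confident examples per class get
# pseudo-labels, and a student is trained on them. K and the models are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(200, 20)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(5000, 20))                  # unlabeled pool

teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
probs = teacher.predict_proba(X_unlab)                 # (N, num_classes)

K = 500
X_pl, y_pl = [], []
for c in range(probs.shape[1]):
    top = np.argsort(-probs[:, c])[:K]                 # K most confident for class c
    X_pl.append(X_unlab[top]); y_pl.append(np.full(K, c))
X_pl = np.vstack(X_pl); y_pl = np.concatenate(y_pl)

student = LogisticRegression(max_iter=1000, warm_start=True).fit(X_pl, y_pl)
student.fit(X_lab, y_lab)           # "fine-tune": warm-started from the pseudo-label fit
print("student accuracy on labeled set:", student.score(X_lab, y_lab))
```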

Large-scale weakly-supervised pre-training for video action recognition

3 code implementations CVPR 2019 Deepti Ghadiyaram, Matt Feiszli, Du Tran, Xueting Yan, Heng Wang, Dhruv Mahajan

Second, frame-based models perform quite well on action recognition; is pre-training for good image features sufficient or is pre-training for spatio-temporal features valuable for optimal transfer learning?

Ranked #1 on Egocentric Activity Recognition on EPIC-KITCHENS-55 (Actions Top-1 (S2) metric)

Action Classification Action Recognition +3

Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search

no code implementations CVPR 2019 Abhimanyu Dubey, Laurens van der Maaten, Zeki Yalniz, Yixuan Li, Dhruv Mahajan

Empirical evaluations of this defense strategy on ImageNet suggest that it is very effective in attack settings in which the adversary does not have access to the image database.
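
The defense builds on nearest-neighbor search against a large database of clean images. As a rough illustration of the idea, rather than the paper's exact procedure, one can predict by voting over the labels of a suspect input's nearest neighbors in that database; everything below is synthetic:

```python
# Rough illustration of a nearest-neighbor defense: instead of trusting the model's
# prediction on a (possibly adversarial) input, look up its k nearest neighbors in a
# large database of clean features and vote over their labels. Data is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Pretend feature database: two well-separated classes of "clean" images.
db_feats = np.vstack([rng.normal(0, 1, size=(5000, 64)),
                      rng.normal(4, 1, size=(5000, 64))])
db_labels = np.array([0] * 5000 + [1] * 5000)
index = NearestNeighbors(n_neighbors=15).fit(db_feats)

def defended_predict(feat, k=15):
    """Predict by majority vote over the k nearest clean neighbors."""
    _, idx = index.kneighbors(feat[None, :], n_neighbors=k)
    votes = db_labels[idx[0]]
    return np.bincount(votes).argmax()

clean = rng.normal(0, 1, size=64)                  # a class-0 example
adversarial = clean + rng.normal(0, 0.5, size=64)  # crude stand-in for a perturbation
print("defended prediction:", defended_predict(adversarial))   # expected: 0
```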

Distributed Newton Methods for Deep Neural Networks

no code implementations1 Feb 2018 Chien-Chih Wang, Kent Loong Tan, Chun-Ting Chen, Yu-Hsiang Lin, S. Sathiya Keerthi, Dhruv Mahajan, S. Sundararajan, Chih-Jen Lin

First, to reduce the communication cost, we propose a diagonalization method such that an approximate Newton direction can be obtained without communication between machines.
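
The diagonalization idea in the abstract can be made concrete on a toy quadratic: approximate the Hessian by its diagonal, so the Newton-like direction becomes an elementwise division of the gradient that each machine could compute from locally available quantities. The objective, damping, and step size below are illustrative choices, not the paper's update:

```python
# Toy illustration of a diagonal Newton-like direction: approximate the Hessian by
# its diagonal so the direction is just an elementwise division of the gradient.
# Quadratic (least-squares) objective, damping, and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)

def grad(w):                                # gradient of 0.5 * ||A w - b||^2
    return A.T @ (A @ w - b)

H_diag = np.einsum("ij,ij->j", A, A)        # diagonal of the Hessian A^T A
damping = 1e-3
w = np.zeros(10)
for _ in range(100):
    d = -grad(w) / (H_diag + damping)       # approximate Newton direction
    w += 0.5 * d                            # small step for stability
print("residual norm:", np.linalg.norm(A @ w - b))
```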

Efficient Estimation of Generalization Error and Bias-Variance Components of Ensembles

no code implementations15 Nov 2017 Dhruv Mahajan, Vivek Gupta, S. Sathiya Keerthi, Sellamanickam Sundararajan, Shravan Narayanamurthy, Rahul Kidambi

We also demonstrate their usefulness in making design choices such as the number of classifiers in the ensemble and the size of a subset of data used for training that is needed to achieve a certain value of generalization error.
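
One standard way to estimate the quantities mentioned above is the empirical bias-variance decomposition of a bagged ensemble on a held-out set. The sketch below is the textbook squared-error version on a synthetic regression problem, not necessarily the estimator proposed in the paper:

```python
# Generic empirical bias/variance estimate for a bagged regression ensemble on a
# held-out set (textbook squared-error decomposition, not the paper's estimator).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=600)
X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]

n_members = 25
preds = []
for m in range(n_members):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))      # bootstrap resample
    tree = DecisionTreeRegressor(max_depth=4, random_state=m).fit(X_tr[idx], y_tr[idx])
    preds.append(tree.predict(X_te))
preds = np.stack(preds)                                   # (members, test points)

mean_pred = preds.mean(axis=0)
bias_sq = np.mean((mean_pred - y_te) ** 2)     # squared bias (plus noise) term
variance = np.mean(preds.var(axis=0))          # spread of members around their mean
avg_error = np.mean((preds - y_te) ** 2)       # average member test error
print(f"bias^2+noise={bias_sq:.3f}  variance={variance:.3f}  avg error={avg_error:.3f}")
```

By construction the average member error equals the sum of the other two terms, which is what makes this decomposition useful when choosing the ensemble size.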

Batch-Expansion Training: An Efficient Optimization Framework

no code implementations22 Apr 2017 Michał Dereziński, Dhruv Mahajan, S. Sathiya Keerthi, S. V. N. Vishwanathan, Markus Weimer

We propose Batch-Expansion Training (BET), a framework for running a batch optimizer on a gradually expanding dataset.
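
The framework can be sketched in a few lines: run several full-batch steps on the current working set, then expand the working set (here geometrically) until it covers all the data. The growth factor, step sizes, and logistic objective are assumptions, not BET's exact settings:

```python
# Sketch of running a batch optimizer on a gradually expanding dataset: optimize on
# a small prefix of the data, then repeatedly enlarge it. Growth factor, step sizes,
# and the logistic-loss objective are illustrative choices, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))
y = (X @ rng.normal(size=20) > 0).astype(float)

def batch_grad(w, Xb, yb):                       # logistic-loss gradient on a batch
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / len(yb)

w = np.zeros(20)
n = 250                                          # initial working-set size
while True:
    Xb, yb = X[:n], y[:n]
    for _ in range(30):                          # a few full-batch steps on the subset
        w -= 0.5 * batch_grad(w, Xb, yb)
    if n == len(X):
        break
    n = min(2 * n, len(X))                       # expand the working set geometrically
print("training accuracy:", np.mean(((X @ w) > 0) == (y > 0.5)))
```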

Towards Geo-Distributed Machine Learning

no code implementations30 Mar 2016 Ignacio Cano, Markus Weimer, Dhruv Mahajan, Carlo Curino, Giovanni Matteo Fumarola

Current solutions to learning from geo-distributed data sources revolve around the idea of first centralizing the data in one data center, and then training locally.

A distributed block coordinate descent method for training $l_1$ regularized linear classifiers

no code implementations18 May 2014 Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan

In this paper we design a distributed algorithm for $l_1$ regularization that is much better suited for such systems than existing algorithms.
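
The core update in l1-regularized coordinate methods is soft-thresholding; in a distributed block variant, each machine owns a block of features and applies that update to its own coordinates. The sketch below sweeps the blocks sequentially in a single process for clarity, so the blocking and schedule are illustrative rather than the paper's algorithm:

```python
# Sketch of block coordinate descent for the Lasso: each block of coordinates (which
# a distributed version would assign to one machine) is updated by soft-thresholding.
# Blocks are swept sequentially in one process here; blocking/schedule are assumed.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 500, 40, 0.1
X = rng.normal(size=(n, d))
w_true = np.zeros(d); w_true[:5] = rng.normal(size=5)        # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)

def soft(a, t):                                              # soft-thresholding operator
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

w = np.zeros(d)
blocks = np.array_split(np.arange(d), 4)                     # 4 "machines", 10 coords each
col_sq = (X ** 2).sum(axis=0) / n
for sweep in range(50):
    for block in blocks:                                     # one machine's block at a time
        for j in block:
            resid = y - X @ w + X[:, j] * w[j]               # residual excluding coord j
            rho = X[:, j] @ resid / n
            w[j] = soft(rho, lam) / col_sq[j]
print("nonzeros:", np.sum(np.abs(w) > 1e-6), " error:", np.linalg.norm(w - w_true))
```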

A Distributed Algorithm for Training Nonlinear Kernel Machines

no code implementations18 May 2014 Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan

This paper concerns the distributed training of nonlinear kernel machines on Map-Reduce.

An efficient distributed learning algorithm based on effective local functional approximations

no code implementations31 Oct 2013 Dhruv Mahajan, Nikunj Agrawal, S. Sathiya Keerthi, S. Sundararajan, Leon Bottou

In this paper we give a novel approach to the distributed training of linear classifiers (involving smooth losses and L2 regularization) that is designed to reduce the total communication costs.

L2 Regularization
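
The idea named in the title, building a local approximation of the global objective on each node that matches the global gradient, solving it locally, and combining the results, can be sketched for distributed ridge regression as below. This is a generic member of that family of methods, not the paper's exact algorithm; the data partitioning and combination rule are assumptions:

```python
# Generic sketch of a "local functional approximation" scheme for distributed ridge
# regression: each node adds a linear correction so its local objective has the
# GLOBAL gradient at the current iterate, solves the corrected problem in closed
# form, and the nodes' solutions are averaged. Not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, mu, nodes = 20, 0.1, 4
w_true = rng.normal(size=d)
data = []
for _ in range(nodes):                                  # each node holds its own shard
    Xp = rng.normal(size=(500, d))
    yp = Xp @ w_true + 0.1 * rng.normal(size=500)
    data.append((Xp, yp))

def local_grad(w, Xp, yp):                              # gradient of a node's ridge loss
    return Xp.T @ (Xp @ w - yp) / len(yp) + mu * w

w = np.zeros(d)
for it in range(5):
    grads = [local_grad(w, Xp, yp) for Xp, yp in data]
    g = np.mean(grads, axis=0)                          # one round of gradient averaging
    sols = []
    for (Xp, yp), gp in zip(data, grads):
        H = Xp.T @ Xp / len(yp) + mu * np.eye(d)        # local curvature
        rhs = Xp.T @ yp / len(yp) - (g - gp)            # local solve, global-gradient corrected
        sols.append(np.linalg.solve(H, rhs))
    w = np.mean(sols, axis=0)                           # combine the local solutions

# Compare against the exact centralized ridge solution.
X_all = np.vstack([Xp for Xp, _ in data]); y_all = np.concatenate([yp for _, yp in data])
w_star = np.linalg.solve(X_all.T @ X_all / len(y_all) + mu * np.eye(d),
                         X_all.T @ y_all / len(y_all))
print("distance to centralized solution:", np.linalg.norm(w - w_star))
```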
