Search Results for author: Avinash Ravichandran

Found 31 papers, 13 papers with code

Active Frame, Location, and Detector Selection for Automated and Manual Video Annotation

no code implementations CVPR 2014 Vasiliy Karasev, Avinash Ravichandran, Stefano Soatto

We describe an information-driven active selection approach to determine which detectors to deploy at which location in which frame of a video to minimize semantic class label uncertainty at every pixel, with the smallest computational cost that ensures a given uncertainty bound.

Task2Vec: Task Embedding for Meta-Learning

1 code implementation ICCV 2019 Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, Pietro Perona

We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks (e.g., tasks based on classifying different types of plants are similar). We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task.

Meta-Learning
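The core idea is concrete enough to sketch: embed a task as the diagonal Fisher information of a fixed probe network's weights, estimated on that task's data, and compare tasks with a normalized cosine distance. A minimal PyTorch sketch under simplifying assumptions (crude batch-level empirical Fisher; the paper uses a more careful per-sample estimator):

```python
import torch
import torch.nn.functional as F

def task2vec_embedding(probe, loader, device="cpu"):
    """Diagonal-Fisher task embedding (simplified): accumulate squared
    gradients of the probe network's weights over the task's data."""
    probe.to(device).train()
    fisher = [torch.zeros_like(p) for p in probe.parameters()]
    n_batches = 0
    for x, y in loader:
        probe.zero_grad()
        loss = F.cross_entropy(probe(x.to(device)), y.to(device))
        loss.backward()
        for f, p in zip(fisher, probe.parameters()):
            if p.grad is not None:
                f.add_(p.grad.detach() ** 2)
        n_batches += 1
    return torch.cat([(f / max(n_batches, 1)).flatten() for f in fisher])

def task_distance(fa, fb, eps=1e-8):
    """Symmetrized cosine distance between two task embeddings."""
    ca, cb = fa / (fa + fb + eps), fb / (fa + fb + eps)
    return 1.0 - F.cosine_similarity(ca, cb, dim=0)
```

With embeddings in hand, selecting a feature extractor for a new task amounts to picking the model whose training-task embedding is closest to the new task's embedding.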

Meta-Learning with Differentiable Convex Optimization

7 code implementations CVPR 2019 Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto

We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.

Few-Shot Image Classification Few-Shot Learning
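The "differentiable convex optimization" here means solving a small convex problem (e.g., ridge regression or a linear SVM) per episode, in closed form or via its KKT conditions, so gradients flow back into the feature extractor. A hedged sketch of the ridge-regression variant (shapes and the regularizer are illustrative):

```python
import torch

def ridge_base_learner(support_feats, support_onehot, query_feats, lam=1.0):
    """Differentiable ridge-regression base learner for one episode.
    support_feats: (N, D), support_onehot: (N, C), query_feats: (M, D).
    The closed-form solve keeps the episode differentiable w.r.t. the
    feature extractor, in the spirit of MetaOptNet."""
    X, Y = support_feats, support_onehot
    D = X.shape[1]
    # W = (X^T X + lam I)^{-1} X^T Y -- solve, don't invert, for stability
    A = X.t() @ X + lam * torch.eye(D, device=X.device, dtype=X.dtype)
    W = torch.linalg.solve(A, X.t() @ Y)   # (D, C)
    return query_feats @ W                  # query logits, (M, C)
```

A cross-entropy loss on the returned query logits, backpropagated through the solve, trains the backbone end-to-end.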

Few-Shot Learning with Embedded Class Models and Shot-Free Meta Training

no code implementations ICCV 2019 Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

We propose a method for learning embeddings for few-shot learning that is suitable for use with any number of ways and any number of shots (shot-free).

Few-Shot Learning Metric Learning
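One way to make "shot-free" concrete: let class identities live as learned vectors in the same embedding space as the samples, so the loss never depends on a fixed number of ways or shots. A toy sketch (the paper's implicit class-model construction differs; shapes here are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddedClassModel(nn.Module):
    """Class embeddings trained jointly with the backbone; classification
    is cosine similarity against the learned class models."""
    def __init__(self, backbone, num_classes, dim):
        super().__init__()
        self.backbone = backbone
        self.class_embeddings = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, x):
        z = F.normalize(self.backbone(x), dim=-1)
        c = F.normalize(self.class_embeddings, dim=-1)
        return z @ c.t()   # cosine logits; new classes just add rows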

A Baseline for Few-Shot Image Classification

3 code implementations ICLR 2020 Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto

When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters.

Classification Few-Shot Image Classification +2
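The baseline is simple enough to sketch: fine-tune the pre-trained network on the labeled support set and, in the transductive variant, additionally minimize the entropy of its predictions on the unlabeled query set. A minimal PyTorch version (step count, learning rate, and the entropy weight are illustrative, not the paper's):

```python
import torch
import torch.nn.functional as F

def transductive_finetune(model, x_support, y_support, x_query,
                          steps=25, lr=5e-5, alpha=1.0):
    """Support cross-entropy + query-prediction entropy, as in the
    transductive fine-tuning baseline."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ce = F.cross_entropy(model(x_support), y_support)
        q = F.softmax(model(x_query), dim=-1)
        entropy = -(q * q.clamp_min(1e-8).log()).sum(-1).mean()
        (ce + alpha * entropy).backward()
        opt.step()
    return model(x_query).argmax(-1)
```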

Unbiased Evaluation of Deep Metric Learning Algorithms

1 code implementation 28 Nov 2019 Istvan Fehervari, Avinash Ravichandran, Srikar Appalaraju

Deep metric learning (DML) is a popular approach for image retrieval, solving verification (same or not) problems, and addressing open-set classification.

Attribute Metric Learning +2

Incremental Meta-Learning via Indirect Discriminant Alignment

no code implementations 11 Feb 2020 Qing Liu, Orchid Majumder, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

Most modern meta-learning methods for few-shot classification operate in two phases: a meta-training phase, where the meta-learner learns a generic representation by solving multiple few-shot tasks sampled from a large dataset, and a testing phase, where the meta-learner leverages its learned internal representation for a specific few-shot task involving classes that were not seen during meta-training.

Incremental Learning Meta-Learning

Multi-Task Incremental Learning for Object Detection

no code implementations 13 Feb 2020 Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

For the difficult cases, where the domain gaps and especially the category differences are large, we explore three different exemplar sampling methods and show that the proposed adaptive sampling method is effective at selecting diverse and informative samples from entire datasets, further preventing forgetting.

Incremental Learning Object +2

Rethinking the Hyperparameters for Fine-tuning

1 code implementation ICLR 2020 Hao Li, Pratik Chaudhari, Hao Yang, Michael Lam, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

Our findings challenge common fine-tuning practices and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.

Transfer Learning

Predicting Training Time Without Training

no code implementations NeurIPS 2020 Luca Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.
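The mechanism can be illustrated on a linearized model with squared loss, where gradient-descent dynamics have a closed form: the residual along each eigendirection of the Jacobian Gram matrix contracts geometrically, so the loss at any future step is computable without training. A toy sketch under those assumptions (the paper's estimator, built on a linearization around pre-trained weights, is considerably more refined):

```python
import torch

def predict_steps_to_loss(J, residual, lr, target_loss, max_steps=100_000):
    """Closed-form GD loss trajectory for a linearized model with loss
    0.5 * ||J (w - w0) + residual||^2. J: (n, p) Jacobian at w0;
    residual: (n,) initial errors f(w0) - y. Assumes lr < 2 / lam.max()
    so every eigenmode contracts."""
    lam, U = torch.linalg.eigh(J @ J.t())   # spectrum of the Gram matrix
    r = U.t() @ residual                     # residual in the eigenbasis
    for t in range(1, max_steps + 1):
        decay = (1.0 - lr * lam) ** (2 * t)
        loss = 0.5 * (decay * r ** 2).sum()
        if loss <= target_loss:
            return t
    return max_steps
```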

LQF: Linear Quadratic Fine-Tuning

no code implementations CVPR 2021 Alessandro Achille, Aditya Golatkar, Avinash Ravichandran, Marzia Polito, Stefano Soatto

Classifiers that are linear in their parameters, and trained by optimizing a convex loss function, have predictable behavior with respect to changes in the training data, initial conditions, and optimization.

Image Classification
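LQF builds on exactly this observation: replace the network with its first-order Taylor expansion around the pre-trained weights and train with a quadratic (MSE) loss, making fine-tuning a convex problem. A hedged sketch of the linearized forward pass using torch.func (names and the training setup around it are illustrative):

```python
import torch
from torch.func import functional_call, jvp

def linearized_logits(model, params0, delta, x):
    """f_lin(x) = f(x; w0) + J_w f(x; w0) @ delta.
    params0: dict of pre-trained parameters (frozen);
    delta:   dict of trainable offsets with matching shapes, e.g.
             {k: torch.zeros_like(v, requires_grad=True) for k, v in params0.items()}.
    Training delta with MSE against one-hot labels is linear-quadratic."""
    f = lambda p: functional_call(model, p, (x,))
    out, jvp_out = jvp(f, (params0,), (delta,))
    return out + jvp_out
```

Because the objective is quadratic in delta, the response of the fine-tuned model to changes in data or initialization is analytically predictable, which is the point of the paper.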

Mixed-Privacy Forgetting in Deep Networks

no code implementations CVPR 2021 Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, Stefano Soatto

We show that the influence of a subset of the training samples can be removed -- or "forgotten" -- from the weights of a network trained on large-scale image classification tasks, and we provide strong computable bounds on the amount of remaining information after forgetting.

Image Classification

Estimating informativeness of samples with Smooth Unique Information

1 code implementation ICLR 2021 Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

We define a notion of information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by the weights.

Informativeness

Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning

1 code implementation CVPR 2021 Zhaowei Cai, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Zhuowen Tu, Stefano Soatto

We present a plug-in replacement for batch normalization (BN) called exponential moving average normalization (EMAN), which improves the performance of existing student-teacher based self- and semi-supervised learning techniques.

Self-Supervised Learning Semi-Supervised Image Classification
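The replacement is mechanically simple: the teacher never computes its own batch statistics; instead, both its parameters and its BatchNorm running statistics are exponential moving averages of the student's, and its BN layers run in eval mode. A minimal sketch of the update (momentum value illustrative):

```python
import torch

@torch.no_grad()
def eman_update(student, teacher, momentum=0.999):
    """EMAN-style teacher update: EMA over parameters AND over BN buffers
    (running mean/var), so the teacher standardizes with smoothed student
    statistics rather than its own batch statistics."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1 - momentum)
    for bs, bt in zip(student.buffers(), teacher.buffers()):
        if bt.dtype.is_floating_point:
            bt.mul_(momentum).add_(bs, alpha=1 - momentum)
        else:                      # e.g., num_batches_tracked counters
            bt.copy_(bs)
```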

Supervised Momentum Contrastive Learning for Few-Shot Classification

no code implementations 26 Jan 2021 Orchid Majumder, Avinash Ravichandran, Subhransu Maji, Alessandro Achille, Marzia Polito, Stefano Soatto

In this work we investigate the complementary roles of these two sources of information by combining instance-discriminative contrastive learning and supervised learning in a single framework called Supervised Momentum Contrastive learning (SUPMOCO).

Classification Contrastive Learning +4
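The combination can be sketched as a SupCon-style loss over a MoCo momentum queue: every queued key sharing the query's class label counts as a positive, merging instance discrimination with label supervision. A hedged sketch (SUPMOCO's exact formulation differs in details; temperature and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def supervised_infonce(q, queue_k, labels_q, labels_queue, tau=0.07):
    """q: (B, D) queries from the online encoder; queue_k: (K, D) keys
    from the momentum encoder. Positives = same-class keys in the queue."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(queue_k, dim=-1)
    logits = q @ k.t() / tau                              # (B, K)
    pos = (labels_q[:, None] == labels_queue[None, :]).float()
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    # average log-likelihood over all positives per query, SupCon-style
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp_min(1)).mean()
```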

A linearized framework and a new benchmark for model selection for fine-tuning

no code implementations 29 Jan 2021 Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona

Since all model selection algorithms in the literature have been tested on different use-cases and never compared directly, we introduce a new comprehensive benchmark for model selection comprising: i) a model zoo of single- and multi-domain models, and ii) many target tasks.

Feature Correlation Model Selection

Representation Consolidation for Training Expert Students

no code implementations 16 Jul 2021 Zhizhong Li, Avinash Ravichandran, Charless Fowlkes, Marzia Polito, Rahul Bhotika, Stefano Soatto

Traditionally, distillation has been used to train a student model to emulate the input/output functionality of a teacher.
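That traditional baseline is the classic Hinton-style distillation loss, sketched below for reference; the paper's consolidation objective, which transfers the representational abilities of multiple teachers rather than one teacher's input/output map, builds on top of it:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard knowledge distillation: KL divergence between
    temperature-softened teacher and student distributions.
    The T*T factor keeps gradient magnitudes comparable across T."""
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```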

Uniform Sampling over Episode Difficulty

2 code implementations NeurIPS 2021 Sébastien M. R. Arnold, Guneet S. Dhillon, Avinash Ravichandran, Stefano Soatto

Episodic training is a core ingredient of few-shot learning to train models on tasks with limited labelled data.

Few-Shot Learning
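The paper's observation is that episodes drawn the usual way are far from uniform in difficulty, and that reweighting them via importance sampling helps. A simplified histogram-based sketch of the idea (the paper models the difficulty distribution online with a parametric fit; bin count and buffer size here are illustrative):

```python
from collections import deque

def importance_weight(difficulty, history, num_bins=10):
    """Weight an episode inversely to the empirical frequency of its
    difficulty (e.g., its loss), approximating uniform sampling over
    difficulty. `history` is a bounded buffer of past difficulties."""
    history.append(difficulty)
    lo, hi = min(history), max(history)
    width = (hi - lo) / num_bins or 1.0
    bins = [0] * num_bins
    for d in history:
        bins[min(int((d - lo) / width), num_bins - 1)] += 1
    b = min(int((difficulty - lo) / width), num_bins - 1)
    # ratio of uniform target density (1/num_bins) to empirical density
    return len(history) / (num_bins * bins[b])

history = deque(maxlen=1000)
# loss = evaluate_episode(model, episode)   # hypothetical helper
# w = importance_weight(loss, history)      # scale this episode's gradient by w
```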

Representation Consolidation from Multiple Expert Teachers

no code implementations 29 Sep 2021 Zhizhong Li, Avinash Ravichandran, Charless Fowlkes, Marzia Polito, Rahul Bhotika, Stefano Soatto

Indeed, we observe experimentally that standard distillation of task-specific teachers, or using these teacher representations directly, reduces downstream transferability compared to a task-agnostic generalist model.

Knowledge Distillation

DIVA: Dataset Derivative of a Learning Task

no code implementations ICLR 2022 Yonatan Dukler, Alessandro Achille, Giovanni Paolini, Avinash Ravichandran, Marzia Polito, Stefano Soatto

A learning task is a function from a training set to the validation error, which can be represented by a trained deep neural network (DNN).

AutoML
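Viewing the task as a function of the training set means one can differentiate through it: the "dataset derivative" is the gradient of the validation loss with respect to per-sample training weights. DIVA obtains this in closed form via linearization; the one-step unrolled toy below only illustrates what quantity is being computed (all names hypothetical):

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def dataset_derivative(model, x_tr, y_tr, x_val, y_val, lr=0.1):
    """d(val loss)/d(per-sample weight) through one weighted SGD step."""
    w = torch.ones(len(x_tr), requires_grad=True)     # per-sample weights
    params = dict(model.named_parameters())
    per_sample = F.cross_entropy(functional_call(model, params, (x_tr,)),
                                 y_tr, reduction="none")
    train_loss = (w * per_sample).mean()
    grads = torch.autograd.grad(train_loss, list(params.values()),
                                create_graph=True)
    # one differentiable SGD step keeps the graph connected back to w
    new_params = {k: p - lr * g
                  for (k, p), g in zip(params.items(), grads)}
    val_loss = F.cross_entropy(functional_call(model, new_params, (x_val,)),
                               y_val)
    return torch.autograd.grad(val_loss, w)[0]
```

Negative entries flag samples whose upweighting would reduce validation error, which is what makes the derivative useful for dataset curation and AutoML.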

Task Adaptive Parameter Sharing for Multi-Task Learning

1 code implementation CVPR 2022 Matthew Wallingford, Hao Li, Alessandro Achille, Avinash Ravichandran, Charless Fowlkes, Rahul Bhotika, Stefano Soatto

TAPS solves a joint optimization problem that determines both which layers to share with the base model and the values of the task-specific weights.

Multi-Task Learning
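Per layer, the idea is a gated task-specific residual on frozen shared weights: the task uses w_base + gate * delta, where a relaxed binary gate is learned jointly with delta under a sparsity penalty, so the optimizer itself decides which layers stay shared. A loose sketch for a linear layer (the paper's exact relaxation and thresholding differ):

```python
import torch
import torch.nn as nn

class TAPSLinear(nn.Module):
    """Task-adaptive parameter sharing for one nn.Linear layer (sketch)."""
    def __init__(self, base_layer):
        super().__init__()
        self.base = base_layer
        for p in self.base.parameters():
            p.requires_grad = False               # shared weights frozen
        self.delta = nn.Parameter(torch.zeros_like(base_layer.weight))
        self.gate_logit = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        gate = torch.sigmoid(self.gate_logit)     # relaxed binary gate
        weight = self.base.weight + gate * self.delta
        return nn.functional.linear(x, weight, self.base.bias)

    def sparsity_penalty(self):
        return torch.sigmoid(self.gate_logit)     # pushes gates toward 0
```

Summing sparsity_penalty() over layers into the task loss trades task accuracy against the number of specialized layers.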

Class-Incremental Learning with Strong Pre-trained Models

1 code implementation CVPR 2022 Tz-Ying Wu, Gurumurthy Swaminathan, Zhizhong Li, Avinash Ravichandran, Nuno Vasconcelos, Rahul Bhotika, Stefano Soatto

We hypothesize that a strong base model can provide a good representation for novel classes and incremental learning can be done with small adaptations.

Class Incremental Learning Incremental Learning

X-DETR: A Versatile Architecture for Instance-wise Vision-Language Tasks

no code implementations 12 Apr 2022 Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen Tu, Rahul Bhotika, Stefano Soatto

In this paper, we study the challenging instance-wise vision-language tasks, where the free-form language is required to align with the objects instead of the whole image.

Rethinking Few-Shot Object Detection on a Multi-Domain Benchmark

1 code implementation 22 Jul 2022 Kibok Lee, Hao Yang, Satyaki Chakraborty, Zhaowei Cai, Gurumurthy Swaminathan, Avinash Ravichandran, Onkar Dabeer

Most existing works on few-shot object detection (FSOD) focus on a setting where both pre-training and few-shot learning datasets are from a similar domain.

Few-Shot Learning Few-Shot Object Detection +1

Masked Vision and Language Modeling for Multi-modal Representation Learning

no code implementations 3 Aug 2022 Gukyeong Kwon, Zhaowei Cai, Avinash Ravichandran, Erhan Bas, Rahul Bhotika, Stefano Soatto

Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose to build joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help of the other modality.

Language Modelling Masked Language Modeling +1
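A toy sketch of the cross-modal half of this setup: masked text tokens are reconstructed by cross-attending over image features (the symmetric image-from-text branch is omitted). Dimensions, masking, and module choices are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CrossModalMaskedLM(nn.Module):
    """Reconstruct masked text tokens with help from the image modality."""
    def __init__(self, vocab, dim=256, heads=4):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, dim)
        self.mask_emb = nn.Parameter(torch.zeros(dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, text_ids, text_mask, image_feats):
        # text_mask: (B, L) bool, True where a token is masked out
        t = self.text_emb(text_ids)
        t = torch.where(text_mask.unsqueeze(-1),
                        self.mask_emb.expand_as(t), t)
        # masked positions query the other modality for reconstruction
        t, _ = self.cross_attn(t, image_feats, image_feats)
        return self.head(t)   # cross-entropy on masked positions only
```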

Semi-supervised Vision Transformers at Scale

1 code implementation 11 Aug 2022 Zhaowei Cai, Avinash Ravichandran, Paolo Favaro, Manchen Wang, Davide Modolo, Rahul Bhotika, Zhuowen Tu, Stefano Soatto

We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored topic despite the wide adoption of ViT architectures across different tasks.

Inductive Bias Semi-Supervised Image Classification

Learning Expressive Prompting With Residuals for Vision Transformers

no code implementations CVPR 2023 Rajshekhar Das, Yonatan Dukler, Avinash Ravichandran, Ashwin Swaminathan

Prompt learning is an efficient approach to adapting transformers by inserting a learnable set of parameters into the input and intermediate representations of a pre-trained model.

Few-Shot Learning Image Classification +2
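A generic prompt-tuning sketch of the input-side mechanism the abstract describes: learnable tokens are prepended to the frozen transformer's input sequence, and only they (plus a task head) are trained. EXPRES additionally wires learnable residual vectors into intermediate representations, which is omitted here:

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Prepend learnable prompt tokens to a frozen token-sequence encoder.
    Assumes the encoder consumes (B, N, D) token embeddings."""
    def __init__(self, frozen_encoder, num_prompts, dim):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.zeros(num_prompts, dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, tokens):                 # tokens: (B, N, D)
        B = tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(B, -1, -1)
        return self.encoder(torch.cat([prompts, tokens], dim=1))
```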

Incremental Few-Shot Meta-Learning via Indirect Discriminant Alignment

no code implementations ECCV 2020 Qing Liu, Orchid Majumder, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

This process enables incrementally improving the model by processing multiple learning episodes, each representing a different learning task, even with few training examples.

Few-Shot Learning Incremental Learning
