no code implementations • ICCV 2023 • Anurag Roy, Vinay Kumar Verma, Sravan Voonna, Kripabandhu Ghosh, Saptarshi Ghosh, Abir Das
Although there have been some recent CL approaches for vision transformers, they either store training instances of previous tasks or require a task identifier during test time, which can be limiting.
no code implementations • 27 Jan 2023 • Soumya Banerjee, Vinay Kumar Verma, Vinay P. Namboodiri
Despite rapid advancements in lifelong learning (LLL) research, most work focuses on improving performance in existing \textit{static} continual learning (CL) setups.
no code implementations • 23 Oct 2022 • Vinay Kumar Verma, Nikhil Mehta, Shijing Si, Ricardo Henao, Lawrence Carin
Weight pruning is among the most popular approaches for compressing deep convolutional neural networks.
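As background for the pruning setting mentioned above, here is a minimal magnitude-based weight-pruning sketch in NumPy. This is the classic baseline criterion, shown only to illustrate the general idea; it is not the method proposed in the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight tensor.

    `sparsity` is the fraction of weights to remove, e.g. 0.5 drops
    the smallest half by absolute value.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)            # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold       # keep weights above the cutoff
    return weights * mask

w = np.array([[0.1, -2.0], [0.5, 3.0]])
pruned = magnitude_prune(w, 0.5)             # removes the 2 smallest weights
```

In practice such masks are applied per layer and the network is fine-tuned afterwards to recover accuracy.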
no code implementations • 20 Oct 2021 • Soumya Banerjee, Vinay Kumar Verma, Toufiq Parag, Maneesh Singh, Vinay P. Namboodiri
We propose a novel approach (CIOSL) for class-incremental learning in an \emph{online streaming setting} to address these challenges.
no code implementations • 5 Oct 2021 • Dhanajit Brahma, Vinay Kumar Verma, Piyush Rai
Further, we present $\textit{Semi-Split CIFAR-10}$, a new benchmark for continual semi-supervised learning, obtained by modifying the $\textit{Split CIFAR-10}$ dataset, in which the tasks with labelled and unlabelled data arrive sequentially.
no code implementations • 12 Jun 2021 • Mohammed Asad Karim, Vinay Kumar Verma, Pravendra Singh, Vinay Namboodiri, Piyush Rai
In our approach, we learn robust representations that generalize across tasks without suffering from catastrophic forgetting or overfitting, allowing future classes to be accommodated with limited samples.
1 code implementation • CVPR 2021 • Vinay Kumar Verma, Kevin J Liang, Nikhil Mehta, Piyush Rai, Lawrence Carin
However, the growth in additional parameters that many of these methods incur can become computationally expensive at larger scales, at times prohibitively so.
no code implementations • NeurIPS 2021 • Sakshi Varshney, Vinay Kumar Verma, Srijith P K, Lawrence Carin, Piyush Rai
Our approach is based on learning a set of global and task-specific parameters.
no code implementations • 23 Feb 2021 • Vinay Kumar Verma, Kevin Liang, Nikhil Mehta, Lawrence Carin
Zero-shot learning (ZSL) has been shown to be a promising approach to generalizing a model to categories unseen during training by leveraging class attributes, but challenges still remain.
no code implementations • NeurIPS 2020 • Pravendra Singh, Vinay Kumar Verma, Pratik Mazumder, Lawrence Carin, Piyush Rai
Further, our approach does not require storing data samples from old tasks, unlike many replay-based methods.
no code implementations • 14 Nov 2020 • Vinay Kumar Verma, Ashish Mishra, Anubha Pandey, Hema A. Murthy, Piyush Rai
We present a meta-learning based generative model for zero-shot learning (ZSL) in the challenging setting where the number of training examples from each \emph{seen} class is very small.
1 code implementation • 23 Jul 2020 • Anurag Roy, Vinay Kumar Verma, Kripabandhu Ghosh, Saptarshi Ghosh
Most existing algorithms for cross-modal Information Retrieval are based on a supervised train-test setup, where a model learns to align the mode of the query (e.g., text) to the mode of the documents (e.g., images) from a given training set.
no code implementations • 18 Jan 2020 • Anubha Pandey, Ashish Mishra, Vinay Kumar Verma, Anurag Mittal, Hema A. Murthy
Conventional approaches to Sketch-Based Image Retrieval (SBIR) assume that the data of all the classes are available during training.
no code implementations • 15 Jan 2020 • Vinay Kumar Verma, Pravendra Singh, Vinay P. Namboodiri, Piyush Rai
The pruner is essentially a multitask deep neural network with binary outputs that identify, for each layer of the original network, the filters that make no significant contribution to the model and can therefore be pruned.
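The pruner described above can be sketched as a toy NumPy illustration. All names and shapes here (`pruner_forward`, the 16-d layer embedding, the 0.5 threshold) are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pruner_forward(layer_embedding, W, b):
    """Hypothetical pruner head: maps a layer's summary embedding to a
    per-filter keep/drop decision via a sigmoid thresholded at 0.5."""
    logits = layer_embedding @ W + b
    scores = 1.0 / (1.0 + np.exp(-logits))   # sigmoid over filter scores
    return (scores > 0.5).astype(float)      # binary keep-mask, one bit per filter

# Toy layer: 8 conv filters of shape (in_channels=4, 3, 3),
# summarised by a 16-d embedding fed to the pruner head.
filters = rng.normal(size=(8, 4, 3, 3))
embedding = rng.normal(size=16)
W, b = rng.normal(size=(16, 8)), np.zeros(8)

mask = pruner_forward(embedding, W, b)       # one bit per filter
pruned = filters[mask.astype(bool)]          # keep only the selected filters
```

In the multitask view, one such binary head is attached per layer of the original network, and the surviving filters form the compressed model.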
1 code implementation • 10 Sep 2019 • Vinay Kumar Verma, Dhanajit Brahma, Piyush Rai
Our proposed model yields significant improvements in the standard ZSL setting as well as the more challenging GZSL setting.
1 code implementation • 11 May 2019 • Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri
Our framework, called Play and Prune (PP), jointly prunes and fine-tunes CNN model parameters, with an adaptive pruning rate, while maintaining the model's predictive performance.
no code implementations • 18 Apr 2019 • Vinay Kumar Verma, Aakansha Mishra, Ashish Mishra, Piyush Rai
We present a probabilistic model for Sketch-Based Image Retrieval (SBIR) where, at retrieval time, we are given sketches from novel classes that were not present at training time.
1 code implementation • CVPR 2019 • Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri
We present a novel deep learning architecture in which the convolution operation leverages heterogeneous kernels.
no code implementations • 26 Nov 2018 • Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri
We present a filter correlation based model compression approach for deep convolutional neural networks.
no code implementations • 27 Jan 2018 • Ashish Mishra, Vinay Kumar Verma, M Shiva Krishna Reddy, Arulkumar S, Piyush Rai, Anurag Mittal
In particular, we assume that the distribution parameters for any action class in the visual space can be expressed as a linear combination of a set of basis vectors where the combination weights are given by the attributes of the action class.
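The linear-combination assumption above can be written out as a quick sketch (the notation here is hypothetical, not the paper's own symbols):

$$
\theta_c \;=\; \sum_{k=1}^{K} a_{c,k}\,\mathbf{b}_k,
$$

where $\theta_c$ denotes the distribution parameters of action class $c$ in the visual space, $\{\mathbf{b}_k\}_{k=1}^{K}$ is the shared set of basis vectors, and $a_c = (a_{c,1}, \dots, a_{c,K})$ is the attribute vector of class $c$ supplying the combination weights. Because $a_c$ is available for unseen classes as well, their visual-space distributions can be composed without any training examples.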
no code implementations • CVPR 2018 • Vinay Kumar Verma, Gundeep Arora, Ashish Mishra, Piyush Rai
Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings.
no code implementations • 15 Nov 2017 • Wenlin Wang, Yunchen Pu, Vinay Kumar Verma, Kai Fan, Yizhe Zhang, Changyou Chen, Piyush Rai, Lawrence Carin
We present a deep generative model for learning to predict classes not seen at training time.
2 code implementations • 25 Jul 2017 • Vinay Kumar Verma, Piyush Rai
We model each class-conditional distribution as an exponential family distribution and the parameters of the distribution of each seen/unseen class are defined as functions of the respective observed class attributes.
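As a sketch of the class-conditional model described above (symbols hypothetical, chosen to match the standard exponential-family form rather than the paper's notation):

$$
p(x \mid c) \;=\; h(x)\,\exp\!\big(\eta_c^{\top} T(x) - A(\eta_c)\big), \qquad \eta_c = f(a_c),
$$

where $T(x)$ are the sufficient statistics, $A(\cdot)$ is the log-partition function, and $f$ is a learned mapping from the observed attribute vector $a_c$ of a seen or unseen class to its natural parameters $\eta_c$. Since $f$ depends only on attributes, the distribution of an unseen class follows directly from its attribute vector.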