no code implementations • 13 Dec 2023 • Piyush Arora, Pratik Mazumder
In this paper, we focus on a more practical setting with no prior information about the bias.
no code implementations • 19 Oct 2022 • Pravendra Singh, Pratik Mazumder, Mohammed Asad Karim
However, in the future, some classes may become restricted due to privacy/ethical concerns, and the restricted class knowledge has to be removed from the models that have been trained on them.
no code implementations • 23 Dec 2021 • Mohammed Asad Karim, Indu Joshi, Pratik Mazumder, Pravendra Singh
We apply our proposed approach to state-of-the-art class-incremental learning methods and empirically show that our framework significantly improves the performance of these methods.
no code implementations • 29 Sep 2021 • Pratik Mazumder, Pravendra Singh, Mohammed Asad Karim
A naive solution is to simply retrain the model from scratch on the complete training data while leaving out the training samples from the restricted classes (full data retraining, FDR).
no code implementations • 30 Jun 2021 • Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri
Our approach significantly improves their performance and further reduces the model biases in the limited data regime.
no code implementations • CVPR 2021 • Pravendra Singh, Pratik Mazumder, Piyush Rai, Vinay P. Namboodiri
Our proposed method uses weight rectifications and affine transformations in order to adapt the model to different tasks that arrive sequentially.
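The snippet above names the mechanism but not its form; as a rough illustrative sketch (not the paper's actual formulation), a shared weight matrix can be adjusted by a small task-specific additive rectification, with a task-specific affine transform applied to the output:

```python
import numpy as np

def task_adapted_forward(x, W, rectification, gamma, beta):
    """Forward pass with task-specific adaptation of shared weights.

    W             -- weights shared across all tasks
    rectification -- small task-specific additive correction to W (assumed form)
    gamma, beta   -- task-specific affine scale and shift on the output
    """
    h = x @ (W + rectification)   # rectified shared weights
    return gamma * h + beta       # task-specific affine transform
```

Only the rectification and affine parameters would be stored per task, keeping the per-task overhead far smaller than a full copy of the network.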
no code implementations • 1 Mar 2021 • Pratik Mazumder, Pravendra Singh, Piyush Rai
Our method selects very few parameters from the model for training every new set of classes instead of training the full model.
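The excerpt does not say how the few parameters are selected; one common proxy, used here purely as an illustrative sketch, is to pick the small fraction of parameters with the largest gradient magnitude for the new classes and freeze the rest:

```python
import numpy as np

def select_and_update(weights, grads, frac=0.01, lr=0.1):
    """Update only the top `frac` of parameters by gradient magnitude.

    The selection criterion is an assumption for illustration; the paper's
    actual selection rule is not given in this excerpt.
    """
    flat = np.abs(grads).ravel()
    k = max(1, int(frac * flat.size))
    threshold = np.partition(flat, -k)[-k]   # k-th largest |gradient|
    mask = np.abs(grads) >= threshold        # parameters chosen for training
    return weights - lr * grads * mask, mask
```

Training only a sparse mask of parameters per set of classes limits interference with what the model already knows.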
no code implementations • NeurIPS 2020 • Pravendra Singh, Vinay Kumar Verma, Pratik Mazumder, Lawrence Carin, Piyush Rai
Further, our approach does not require storing data samples from the old tasks, which is done by many replay based methods.
no code implementations • 22 Nov 2020 • Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri
Our method relies on generating robust prototypes from a set of few examples.
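The excerpt does not detail how the prototypes are made robust; a minimal prototypical-network-style baseline, shown only as a sketch of the general idea, averages the few support embeddings of each class and classifies queries by nearest prototype:

```python
import numpy as np

def build_prototypes(support_embeddings, support_labels):
    """One prototype per class: the mean of its few support embeddings."""
    prototypes = {}
    for label in np.unique(support_labels):
        prototypes[label] = support_embeddings[support_labels == label].mean(axis=0)
    return prototypes

def classify(query_embedding, prototypes):
    """Assign the query to the class of the nearest prototype (Euclidean)."""
    return min(prototypes, key=lambda c: np.linalg.norm(query_embedding - prototypes[c]))
```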
no code implementations • 29 Jun 2020 • Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri
We then simultaneously train for the composite rotation prediction task along with the original classification task, which forces the network to learn high-quality generic features that help improve the few-shot classification performance.
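The exact composite transformation is not specified in this excerpt; assuming it combines a rotation of the whole image with a shared rotation of its four quadrants (16 joint labels), the self-supervised labels could be built like this:

```python
import numpy as np

def composite_rotate(img, whole_k, patch_k):
    """Rotate the whole image by whole_k*90 degrees, then rotate each of its
    four quadrants by patch_k*90 degrees. Returns the transformed image and
    a single composite label in [0, 15]. (Assumed form of the transformation;
    requires a square image with even side length.)"""
    img = np.rot90(img, k=whole_k)
    h, w = img.shape[0] // 2, img.shape[1] // 2
    out = img.copy()
    for i in (0, 1):
        for j in (0, 1):
            patch = out[i*h:(i+1)*h, j*w:(j+1)*w]
            out[i*h:(i+1)*h, j*w:(j+1)*w] = np.rot90(patch, k=patch_k)
    label = whole_k * 4 + patch_k  # 4 whole-image x 4 patch rotations
    return out, label
```

The network would then be trained on a joint objective, e.g. `total_loss = classification_loss + lam * rotation_prediction_loss`, so the rotation task regularizes the learned features.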
no code implementations • 8 Jun 2020 • Pravendra Singh, Pratik Mazumder, Vinay P. Namboodiri
Our proposed technique, namely Passive Batch Injection Training Technique (PBITT), even reduces the level of overfitting in networks that already use the standard techniques for reducing overfitting such as $L_2$ regularization and batch normalization, resulting in significant accuracy improvements.
no code implementations • 27 May 2020 • Pratik Mazumder, Pravendra Singh, Kranti Kumar Parida, Vinay P. Namboodiri
We use the semantic relatedness of text embeddings as a means for zero-shot learning by aligning audio and video embeddings with the corresponding class label text feature space.
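Once audio-visual embeddings are aligned to the class-label text space, zero-shot inference reduces to a nearest-text-embedding lookup. A minimal sketch of that inference step (the alignment training itself is not shown, and the fused-embedding form is an assumption):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity with a small epsilon for numerical safety."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def zero_shot_classify(av_embedding, class_text_embeddings):
    """Pick the class whose label-text embedding is most similar to the
    (already aligned) audio-visual embedding. Works for unseen classes,
    since only their label text is needed."""
    scores = {c: cosine(av_embedding, t) for c, t in class_text_embeddings.items()}
    return max(scores, key=scores.get)
```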
Ranked #6 on GZSL Video Classification on ActivityNet-GZSL (main)
no code implementations • 21 Oct 2019 • Pratik Mazumder, Pravendra Singh, Vinay Namboodiri
We propose an alternative design for pointwise convolution, which uses spatial information from the input efficiently.
no code implementations • 11 Mar 2019 • Pravendra Singh, Pratik Mazumder, Vinay P. Namboodiri
Recently, researchers have tried to boost the performance of CNNs by re-calibrating the feature maps produced by these filters, e.g., Squeeze-and-Excitation Networks (SENets).