Search Results for author: Pratik Mazumder

Found 14 papers, 0 papers with code

Hybrid Sample Synthesis-based Debiasing of Classifier in Limited Data Setting

no code implementations · 13 Dec 2023 · Piyush Arora, Pratik Mazumder

In this paper, we focus on a more practical setting with no prior information about the bias.

Attaining Class-level Forgetting in Pretrained Model using Few Samples

no code implementations · 19 Oct 2022 · Pravendra Singh, Pratik Mazumder, Mohammed Asad Karim

However, some classes may become restricted in the future due to privacy or ethical concerns, and knowledge of the restricted classes must be removed from models that were trained on them.

DILF-EN framework for Class-Incremental Learning

no code implementations · 23 Dec 2021 · Mohammed Asad Karim, Indu Joshi, Pratik Mazumder, Pravendra Singh

We apply our proposed approach to state-of-the-art class-incremental learning methods and empirically show that our framework significantly improves the performance of these methods.

Class-Incremental Learning, Incremental Learning

Restricted Category Removal from Model Representations using Limited Data

no code implementations · 29 Sep 2021 · Pratik Mazumder, Pravendra Singh, Mohammed Asad Karim

A naive solution is to retrain the model from scratch on the complete training data while leaving out the training samples from the restricted classes (FDR: full data retraining).

Fair Visual Recognition in Limited Data Regime using Self-Supervision and Self-Distillation

no code implementations · 30 Jun 2021 · Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri

Our approach significantly improves their performance and further reduces the model biases in the limited data regime.

Rectification-based Knowledge Retention for Continual Learning

no code implementations · CVPR 2021 · Pravendra Singh, Pratik Mazumder, Piyush Rai, Vinay P. Namboodiri

Our proposed method uses weight rectifications and affine transformations in order to adapt the model to different tasks that arrive sequentially.

Continual Learning, Generalized Zero-Shot Learning, +1

Few-Shot Lifelong Learning

no code implementations · 1 Mar 2021 · Pratik Mazumder, Pravendra Singh, Piyush Rai

Our method selects very few parameters from the model for training every new set of classes instead of training the full model.
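The idea of training only a small selected subset of parameters for each new set of classes can be sketched as a gradient mask over the weight vector. This is a generic illustration, not the paper's actual selection criterion; the top-k-by-magnitude rule and all values below are hypothetical.

```python
import numpy as np

def masked_update(params, grads, trainable_mask, lr=0.1):
    """Update only the parameters selected by trainable_mask;
    all other parameters stay frozen (a generic sketch, not the
    paper's exact selection mechanism)."""
    return params - lr * grads * trainable_mask

# Hypothetical example: allow training on only the 2 highest-magnitude weights.
params = np.array([0.5, -2.0, 0.1, 1.5])
grads = np.array([1.0, 1.0, 1.0, 1.0])
k = 2
mask = np.zeros_like(params)
mask[np.argsort(-np.abs(params))[:k]] = 1.0  # select top-k by magnitude
new_params = masked_update(params, grads, mask)
# The unselected weights (indices 0 and 2) are left unchanged,
# preserving what the model learned for earlier classes.
```

Freezing most parameters this way limits interference with previously learned classes, which is the motivation the snippet above describes.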

Continual Learning

Calibrating CNNs for Lifelong Learning

no code implementations · NeurIPS 2020 · Pravendra Singh, Vinay Kumar Verma, Pratik Mazumder, Lawrence Carin, Piyush Rai

Further, unlike many replay-based methods, our approach does not require storing data samples from the old tasks.

Continual Learning

RNNP: A Robust Few-Shot Learning Approach

no code implementations · 22 Nov 2020 · Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri

Our method relies on generating robust prototypes from a set of few examples.
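The starting point for prototype-based few-shot classification is the class-mean prototype, which the paper then refines to make robust. Below is a minimal sketch of that baseline step only; the embeddings and labels are hypothetical, and the paper's refinement of these prototypes is not shown.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Compute one prototype per class as the mean embedding of its
    few support examples (the standard baseline; the paper refines
    these prototypes further to handle noisy or outlying examples)."""
    classes = np.unique(labels)
    return {int(c): embeddings[labels == c].mean(axis=0) for c in classes}

# Hypothetical 2-D embeddings for two classes with 3 shots each.
emb = np.array([[1.0, 0.0], [1.2, 0.2], [0.8, -0.2],
                [0.0, 1.0], [0.2, 1.2], [-0.2, 0.8]])
lab = np.array([0, 0, 0, 1, 1, 1])
protos = class_prototypes(emb, lab)
# Queries are then classified by nearest prototype in embedding space.
```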

Few-Shot Learning

Improving Few-Shot Learning using Composite Rotation based Auxiliary Task

no code implementations · 29 Jun 2020 · Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri

We then simultaneously train for the composite rotation prediction task along with the original classification task, which forces the network to learn high-quality generic features that help improve the few-shot classification performance.
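A rotation-prediction auxiliary task of this kind builds a self-supervised batch by rotating each image and labelling it with the rotation applied; the network is then trained jointly, e.g. with total_loss = cls_loss + lam * rot_loss. The sketch below uses plain quarter-turn rotations; the paper's composite rotation scheme (which combines multiple rotations per image) is not reproduced here, and the toy image is hypothetical.

```python
import numpy as np

def rotation_batch(images):
    """Build the self-supervised auxiliary batch: each image is
    rotated by 0/90/180/270 degrees and labelled with the rotation
    index (a plain-rotation sketch, not the paper's composite scheme)."""
    rotated, rot_labels = [], []
    for img in images:
        for k in range(4):                  # k quarter-turns
            rotated.append(np.rot90(img, k))
            rot_labels.append(k)
    return np.stack(rotated), np.array(rot_labels)

# Hypothetical 2x2 single-channel "image".
imgs = np.array([[[1, 2], [3, 4]]])
aug, rot_lab = rotation_batch(imgs)
# A rotation-prediction head is trained on (aug, rot_lab) alongside
# the usual classification head, sharing the same feature extractor.
```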

Classification, Few-Shot Learning, +1

Passive Batch Injection Training Technique: Boosting Network Performance by Injecting Mini-Batches from a different Data Distribution

no code implementations · 8 Jun 2020 · Pravendra Singh, Pratik Mazumder, Vinay P. Namboodiri

Our proposed technique, the Passive Batch Injection Training Technique (PBITT), reduces overfitting even in networks that already use standard countermeasures such as $L_2$ regularization and batch normalization, resulting in significant accuracy improvements.

Object Detection

CPWC: Contextual Point Wise Convolution for Object Recognition

no code implementations · 21 Oct 2019 · Pratik Mazumder, Pravendra Singh, Vinay Namboodiri

We propose an alternative design for pointwise convolution, which uses spatial information from the input efficiently.

Object Recognition

Accuracy Booster: Performance Boosting using Feature Map Re-calibration

no code implementations · 11 Mar 2019 · Pravendra Singh, Pratik Mazumder, Vinay P. Namboodiri

Recently, researchers have tried to boost the performance of CNNs by re-calibrating the feature maps produced by these filters, e.g., Squeeze-and-Excitation Networks (SENets).
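The SENet-style re-calibration the snippet refers to can be sketched as: squeeze each channel to a scalar by global average pooling, pass the scalars through a small bottleneck, and rescale the channels by sigmoid gates. This is a minimal NumPy sketch of that standard building block, not the paper's proposed booster; the weights w1 and w2 are hypothetical.

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """SENet-style channel re-calibration (a minimal sketch of the
    idea the paper builds on, with hypothetical weights w1, w2)."""
    z = feature_maps.mean(axis=(1, 2))          # squeeze: (C,) channel descriptors
    s = np.maximum(z @ w1, 0.0)                 # excitation: bottleneck + ReLU
    gates = 1.0 / (1.0 + np.exp(-(s @ w2)))     # sigmoid gates in (0, 1)
    return feature_maps * gates[:, None, None]  # rescale each channel

# Hypothetical 2-channel 2x2 feature maps; identity weights for clarity
# (a real block uses a learned C -> C/r -> C bottleneck).
fmap = np.ones((2, 2, 2))
w1 = np.eye(2)
w2 = np.eye(2)
out = squeeze_excite(fmap, w1, w2)
```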

General Classification, Object Detection, +1
