no code implementations • 31 Jul 2023 • Kousik Rajesh, Mrigank Raman, Mohammed Asad Karim, Pranit Chawla
In recent times there has been a surge of multi-modal architectures based on Large Language Models (LLMs). These architectures leverage the zero-shot generation capabilities of LLMs: they project image embeddings into the text space and then use the auto-regressive capacity of the LLM to solve tasks such as VQA, captioning, and image retrieval.
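The projection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the single linear projection, and the random placeholder features are all assumptions (real systems often use an MLP or a cross-attention resampler, and a trained vision encoder).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a vision encoder emits 768-d patch embeddings,
# while the LLM's token-embedding space is 4096-d.
IMG_DIM, TXT_DIM, NUM_PATCHES = 768, 4096, 16

# Stand-in for frozen image features of one image (e.g. from a ViT encoder).
image_feats = rng.standard_normal((NUM_PATCHES, IMG_DIM))

# A learned linear projection mapping image space -> text-embedding space.
W = rng.standard_normal((IMG_DIM, TXT_DIM)) * 0.02
b = np.zeros(TXT_DIM)

# Project the image embeddings so they live in the same space as token
# embeddings and can be prepended to the prompt.
visual_tokens = image_feats @ W + b                 # shape (16, 4096)

prompt_tokens = rng.standard_normal((8, TXT_DIM))   # embedded text prompt
llm_input = np.concatenate([visual_tokens, prompt_tokens], axis=0)
print(llm_input.shape)  # (24, 4096)
```

The LLM then decodes auto-regressively over this combined sequence, which is what lets a single frozen language model handle VQA, captioning, and retrieval-style prompts.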
no code implementations • 19 Oct 2022 • Pravendra Singh, Pratik Mazumder, Mohammed Asad Karim
However, in the future, some classes may become restricted due to privacy/ethical concerns, and the restricted class knowledge has to be removed from the models that have been trained on them.
no code implementations • 23 Dec 2021 • Mohammed Asad Karim, Indu Joshi, Pratik Mazumder, Pravendra Singh
We apply our proposed approach to state-of-the-art class-incremental learning methods and empirically show that our framework significantly improves the performance of these methods.
no code implementations • 29 Sep 2021 • Pratik Mazumder, Pravendra Singh, Mohammed Asad Karim
A naive solution is to retrain the model from scratch on the complete training data while leaving out the training samples belonging to the restricted classes (FDR: full data retraining).
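The FDR baseline amounts to a filter-then-retrain loop. A minimal sketch, with toy placeholder data (the sample names, labels, and restricted set are illustrative, and the actual retraining step is only indicated in a comment):

```python
# FDR (full data retraining) sketch: drop every sample whose label is in
# the restricted set, then retrain a fresh model on what remains.

def filter_restricted(samples, labels, restricted):
    """Keep only (sample, label) pairs whose label is not restricted."""
    kept = [(x, y) for x, y in zip(samples, labels) if y not in restricted]
    return [x for x, _ in kept], [y for _, y in kept]

samples = ["img0", "img1", "img2", "img3", "img4"]
labels = [0, 1, 2, 1, 0]
restricted = {1}  # class 1 must be forgotten

clean_x, clean_y = filter_restricted(samples, labels, restricted)
print(clean_y)  # [0, 2, 0]

# A new model would now be trained from scratch on (clean_x, clean_y).
# The cost of that full retraining run is what motivates cheaper
# restricted-class forgetting methods.
```

The filtering itself is trivial; the expense of FDR lies entirely in repeating the full training run, which is why it serves as a baseline rather than a practical solution.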
no code implementations • 12 Jun 2021 • Mohammed Asad Karim, Vinay Kumar Verma, Pravendra Singh, Vinay Namboodiri, Piyush Rai
In our approach, we learn robust representations that generalize across tasks without catastrophic forgetting or overfitting, so that future classes with limited samples can be accommodated.