Search Results for author: Gobinda Saha

Found 9 papers, 6 papers with code

Verifix: Post-Training Correction to Improve Label Noise Robustness with Verified Samples

no code implementations • 13 Mar 2024 • Sangamesh Kodge, Deepak Ravikumar, Gobinda Saha, Kaushik Roy

We introduce Verifix, a novel Singular Value Decomposition (SVD) based algorithm that leverages a small, verified dataset to correct the model weights using a single update.
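
The abstract gives only the high-level idea. As a rough illustration of what an SVD-based single-update weight correction could look like, here is a minimal sketch; it is not the published Verifix algorithm, and the function name, the choice of top-k subspace, and the projection rule are all assumptions for illustration: estimate the dominant input subspace of a layer from verified samples via SVD, then project the layer's weights onto it.

```python
import torch

def svd_correct_layer(weight, verified_acts, k=32):
    """Hypothetical single-update correction (illustrative sketch, not
    the published Verifix algorithm): project `weight` onto the top-k
    right-singular subspace of activations from verified samples.

    weight:        (out_dim, in_dim) layer weight matrix
    verified_acts: (n_samples, in_dim) inputs to this layer from the
                   small, trusted/verified dataset
    """
    # SVD of the verified activations; rows of vh span the input
    # directions that the trusted data actually uses.
    _, _, vh = torch.linalg.svd(verified_acts, full_matrices=False)
    basis = vh[:k]                # (k, in_dim) top right-singular vectors
    proj = basis.T @ basis        # (in_dim, in_dim) projector onto that subspace
    # Single multiplicative update: keep only the weight components
    # acting on the trusted activation subspace.
    return weight @ proj

# Toy usage with random data
w = torch.randn(64, 128)
acts = torch.randn(100, 128)      # stand-in for verified-sample activations
w_corrected = svd_correct_layer(w, acts, k=16)
```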

Deep Unlearning: Fast and Efficient Training-free Approach to Controlled Forgetting

1 code implementation • 1 Dec 2023 • Sangamesh Kodge, Gobinda Saha, Kaushik Roy

We demonstrate our algorithm's efficacy on ImageNet using a Vision Transformer with only $\sim$1.5% drop in retain accuracy compared to the original model while maintaining under 1% accuracy on the unlearned class samples.

Image Classification • Machine Unlearning

Continual Learning with Scaled Gradient Projection

1 code implementation • 2 Feb 2023 • Gobinda Saha, Kaushik Roy

In neural networks, continual learning results in gradient interference among sequential tasks, leading to catastrophic forgetting of old tasks while learning new ones.

Continual Learning • Image Classification
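
The snippet describes gradient interference between sequential tasks. A common remedy in this line of work is to project each new-task gradient away from a subspace important to old tasks, and a "scaled" variant removes each component only in proportion to its importance rather than fully. Below is a minimal sketch under those assumptions; the basis construction, the importance vector, and the exact scaling rule are illustrative, not the paper's procedure:

```python
import torch

def scaled_project_gradient(grad, past_basis, importance):
    """Illustrative scaled gradient projection (assumed rule, not the
    paper's exact method).

    grad:        (d,) flattened gradient for the new task
    past_basis:  (k, d) orthonormal basis of directions important to
                 previously learned tasks
    importance:  (k,) values in [0, 1]; 1 = fully protect a direction,
                 0 = leave the gradient untouched along it
    """
    coeffs = past_basis @ grad   # (k,) components along old-task directions
    # Remove each protected component in proportion to its importance;
    # plain (unscaled) projection corresponds to importance = 1 everywhere.
    return grad - past_basis.T @ (importance * coeffs)

# Toy usage: protect 2 random orthonormal directions in a 10-d space
q, _ = torch.linalg.qr(torch.randn(10, 2))
basis = q.T                                  # (2, 10), rows orthonormal
g = torch.randn(10)
g_new = scaled_project_gradient(g, basis, torch.tensor([1.0, 0.5]))
```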

Synthetic Dataset Generation for Privacy-Preserving Machine Learning

no code implementations • 6 Oct 2022 • Efstathia Soufleri, Gobinda Saha, Kaushik Roy

We evaluate our method on an image classification dataset (CIFAR10) and show that our synthetic data can be used for training networks from scratch, producing reasonable classification performance.

Image Classification • Memorization +5

Saliency Guided Experience Packing for Replay in Continual Learning

1 code implementation • 10 Sep 2021 • Gobinda Saha, Kaushik Roy

One way to enable such learning is to store past experiences in the form of input examples in episodic memory and replay them when learning new tasks.

Continual Learning • Image Classification
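
The abstract describes episodic-memory replay: store past input examples and mix them back in when training on new tasks. A generic replay loop might look like the sketch below; the saliency-guided "packing" of what to store is the paper's actual contribution and is not reproduced here, and the buffer management, loss weighting, and function names are assumptions:

```python
import random
import torch
import torch.nn.functional as F

def replay_step(model, optimizer, new_batch, memory, replay_size=16):
    """One training step with generic experience replay (sketch, not the
    paper's saliency-guided variant): add a loss term on stored old-task
    examples alongside the current batch."""
    x_new, y_new = new_batch
    loss = F.cross_entropy(model(x_new), y_new)
    if memory:
        # Replay a random subset of episodic memory with the new data.
        x_old, y_old = zip(*random.sample(memory, min(replay_size, len(memory))))
        loss = loss + F.cross_entropy(model(torch.stack(x_old)), torch.stack(y_old))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Naively store a few new examples for future replay; the paper
    # instead selects what to store using saliency maps.
    memory.extend(zip(x_new[:4].detach(), y_new[:4]))

# Toy usage
model = torch.nn.Linear(8, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
memory = []
replay_step(model, opt, (torch.randn(32, 8), torch.randint(0, 3, (32,))), memory)
```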

Gradient Projection Memory for Continual Learning

1 code implementation • ICLR 2021 • Gobinda Saha, Isha Garg, Kaushik Roy

The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems.

Attribute • Continual Learning +1

SPACE: Structured Compression and Sharing of Representational Space for Continual Learning

1 code implementation • 23 Jan 2020 • Gobinda Saha, Isha Garg, Aayush Ankit, Kaushik Roy

A minimal number of extra dimensions required to explain the current task is added to the Core space, and the remaining Residual is freed up for learning the next task.

Continual Learning
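
The snippet describes splitting representational space into a Core (dimensions needed by tasks so far) and a free Residual. One plausible reading, sketched below, is to take the SVD of a layer's activations on the current task and keep the minimal number of directions explaining a fixed fraction of variance as Core; the variance-threshold criterion and all names here are assumptions, not the paper's exact procedure:

```python
import torch

def split_core_residual(activations, var_threshold=0.99):
    """Illustrative Core/Residual split (assumed criterion, not the
    paper's exact method): the minimal number of singular directions
    explaining `var_threshold` of activation variance form the Core;
    the orthogonal complement is the Residual, free for future tasks."""
    _, s, vh = torch.linalg.svd(activations, full_matrices=True)
    # Cumulative fraction of variance explained by the top-i directions.
    energy = torch.cumsum(s**2, dim=0) / torch.sum(s**2)
    k = int(torch.searchsorted(energy, var_threshold).item()) + 1
    core = vh[:k]        # (k, d) directions reserved for this task
    residual = vh[k:]    # (d - k, d) directions freed for the next task
    return core, residual

# Toy usage with random current-task activations
core, residual = split_core_residual(torch.randn(200, 64))
print(core.shape[0], "Core dims,", residual.shape[0], "Residual dims")
```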
