Search Results for author: Amartya Sanyal

Found 27 papers, 8 papers with code

Provable Privacy with Non-Private Pre-Processing

no code implementations · 19 Mar 2024 · Yaxi Hu, Amartya Sanyal, Bernhard Schölkopf

When analysing Differentially Private (DP) machine learning pipelines, the potential privacy cost of data-dependent pre-processing is frequently overlooked in privacy accounting.

Imputation · Quantization
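
For reference, the privacy notion being accounted for here is standard (ε, δ)-differential privacy; a quick reminder of the textbook definition (this is not a statement of the paper's result):

```latex
% A randomized mechanism M is (eps, delta)-differentially private if,
% for all neighbouring datasets D, D' and all measurable output sets S:
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```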

On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective

no code implementations · 26 Feb 2024 · Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal

In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms.

Corrective Machine Unlearning

1 code implementation · 21 Feb 2024 · Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, Amartya Sanyal

We hope our work spurs research towards developing better methods for corrective unlearning and offers practitioners a new strategy to handle data integrity challenges arising from web-scale training.

Machine Unlearning

Can semi-supervised learning use all the data effectively? A lower bound perspective

no code implementations · NeurIPS 2023 · Alexandru Ţifrea, Gizem Yüce, Amartya Sanyal, Fanny Yang

Prior works have shown that semi-supervised learning algorithms can leverage unlabeled data to improve over the labeled sample complexity of supervised learning (SL) algorithms.

How robust accuracy suffers from certified training with convex relaxations

no code implementations · 12 Jun 2023 · Piersilvio De Bartolomeis, Jacob Clarysse, Amartya Sanyal, Fanny Yang

In this paper, we systematically compare the standard and robust error of these two robust training paradigms across multiple computer vision tasks.

PILLAR: How to make semi-private learning more effective

1 code implementation · 6 Jun 2023 · Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal

In Semi-Supervised Semi-Private (SP) learning, the learner has access to both public unlabelled and private labelled data.
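
A minimal sketch of the semi-private setting described above, not of the paper's PILLAR method itself: the public unlabelled pool fixes a low-dimensional projection, and the private labelled data is only touched through noised statistics. The projection dimension, noise scale, and all function names are illustrative assumptions; a real privacy guarantee would additionally need clipping and calibrated noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_projection(X_public, k):
    """Fit a k-dimensional PCA projection on public, unlabelled data."""
    Xc = X_public - X_public.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                      # (d, k) projection matrix

def noisy_linear_classifier(X_priv, y_priv, noise_std):
    """Toy 'private' least-squares classifier: perturb the sufficient
    statistics X^T X and X^T y with Gaussian noise (illustrative only)."""
    d = X_priv.shape[1]
    A = X_priv.T @ X_priv + rng.normal(0, noise_std, (d, d))
    b = X_priv.T @ y_priv + rng.normal(0, noise_std, d)
    return np.linalg.solve(A + 1e-3 * np.eye(d), b)

# Public unlabelled data fixes the representation; private labelled
# data only enters through the noised statistics.
X_pub = rng.normal(size=(5000, 50))
X_prv = rng.normal(size=(200, 50))
y_prv = np.sign(X_prv[:, 0])

P = pca_projection(X_pub, k=5)
w = noisy_linear_classifier(X_prv @ P, y_prv, noise_std=1.0)
preds = np.sign(X_prv @ P @ w)
```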

Certifying Ensembles: A General Certification Theory with S-Lipschitzness

no code implementations · 25 Apr 2023 · Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi

Improving and guaranteeing the robustness of deep learning models has been a topic of intense research.

Do you pay for Privacy in Online learning?

no code implementations · 10 Oct 2022 · Amartya Sanyal, Giorgia Ramponi

Online learning, in the mistake bound model, is one of the most fundamental concepts in learning theory.

Learning Theory
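
A quick reminder of the mistake bound model referenced above (one standard way to write it): the learner predicts labels one at a time and is charged for each error, and the mistake bound is the worst case over realizable sequences.

```latex
% Mistake bound of an online learner A on a concept class C: the
% worst-case number of prediction errors over any sequence labelled
% by some c in C, where \hat{y}_t is A's prediction at round t
M_A(\mathcal{C}) \;=\; \sup_{c \in \mathcal{C}} \; \sup_{x_1, x_2, \ldots} \; \bigl|\{\, t : \hat{y}_t \neq c(x_t) \,\}\bigr|
```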

A law of adversarial risk, interpolation, and label noise

no code implementations · 8 Jul 2022 · Daniel Paleka, Amartya Sanyal

In supervised learning, it has been shown that label noise in the data can be interpolated without penalties on test accuracy.

Inductive Bias
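
The central quantity here is the standard adversarial risk under an ℓp-bounded perturbation of radius ε:

```latex
% Adversarial risk of a classifier f at perturbation budget eps
R_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{(x,y) \sim \mathcal{D}} \Bigl[\, \max_{\|\delta\|_p \le \varepsilon} \mathbb{1}\{\, f(x+\delta) \neq y \,\} \Bigr]
```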

How Robust is Unsupervised Representation Learning to Distribution Shift?

no code implementations · 17 Jun 2022 · Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal

As such, there is a lack of insight into the robustness to distribution shift of representations learned by unsupervised methods such as self-supervised learning (SSL) and auto-encoder based algorithms (AE).

Representation Learning · Self-Supervised Learning

Catastrophic overfitting can be induced with discriminative non-robust features

1 code implementation · 16 Jun 2022 · Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr

Through extensive experiments, we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of AT.

Robust classification
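
A minimal sketch of the kind of intervention studied here: injecting a tiny, class-correlated pattern so that a simple, non-robust feature perfectly predicts the label. The pattern construction, strength, and names below are illustrative assumptions, not the paper's exact protocol.

```python
import torch

def inject_easy_feature(x, y, num_classes=10, strength=0.02):
    """Add a small class-dependent pattern to each image so that a
    simple, low-magnitude feature becomes perfectly discriminative."""
    patterns = torch.randn(num_classes, *x.shape[1:])
    norms = patterns.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
    patterns = strength * patterns / norms   # unit patterns, scaled down
    return (x + patterns[y]).clamp(0, 1)

x = torch.rand(32, 3, 32, 32)                # a batch of images in [0, 1]
y = torch.randint(0, 10, (32,))
x_easy = inject_easy_feature(x, y)
```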

How unfair is private learning?

no code implementations · 8 Jun 2022 · Amartya Sanyal, Yaxi Hu, Fanny Yang

As machine learning algorithms are deployed on sensitive data in critical decision making processes, it is becoming increasingly important that they are also private and fair.

Decision Making · Fairness

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training

1 code implementation · 2 Feb 2022 · Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania

Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
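
A minimal sketch of single-step adversarial training with a random-noise initialization, in the spirit of the fix studied in this line of work; the noise magnitude, step size, and the absence of a projection step below are illustrative choices, not the paper's verbatim recipe.

```python
import torch
import torch.nn.functional as F

def single_step_adv_batch(model, x, y, eps, alpha):
    """One FGSM step taken from a uniformly noised starting point."""
    # Random start inside an enlarged ball (illustrative factor of 2).
    delta = torch.empty_like(x).uniform_(-2 * eps, 2 * eps)
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    # Single gradient-sign step; noise and clipping are exactly the
    # design axes this line of work revisits.
    x_adv = x + delta.detach() + alpha * grad.sign()
    return x_adv.clamp(0, 1)

def train_step(model, optimizer, x, y, eps=8 / 255, alpha=10 / 255):
    x_adv = single_step_adv_batch(model, x, y, eps, alpha)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```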

Towards Adversarial Evaluations for Inexact Machine Unlearning

3 code implementations · 17 Jan 2022 · Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru

Machine learning models face growing concerns regarding the storage of personal user data and the adverse impact of corrupted data such as backdoors or systematic bias.

Machine Unlearning · Memorization

Towards fast and effective single-step adversarial training

no code implementations · 29 Sep 2021 · Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania

In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.

Identifying and Exploiting Structures for Reliable Deep Learning

no code implementations · 16 Aug 2021 · Amartya Sanyal

Deep learning research has recently witnessed impressively fast-paced progress on a wide range of tasks, including computer vision, natural language processing, and reinforcement learning.

How Benign is Benign Overfitting?

no code implementations · ICLR 2021 · Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr

We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.

Adversarial Robustness · Representation Learning

How benign is benign overfitting?

no code implementations · 8 Jul 2020 · Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip H. S. Torr

We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.

Adversarial Robustness · Representation Learning

Progressive Skeletonization: Trimming more fat from a network at initialization

1 code implementation · ICLR 2021 · Pau de Jorge, Amartya Sanyal, Harkirat S. Behl, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania

Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity, both at inference and training time, while only marginally degrading their performance.
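
For context, a minimal sketch of one-shot, gradient-based saliency pruning at initialization (SNIP-style); the paper's contribution is a progressive variant of this idea, so treat the code below as the basic ingredient rather than the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def saliency_prune_at_init(model, x, y, sparsity):
    """Score each weight by |weight * gradient| on one batch at
    initialization and zero out the lowest-scoring fraction.
    (One-shot; keeping the mask enforced during later training is
    up to the caller.)"""
    loss = F.cross_entropy(model(x), y)
    params = [p for p in model.parameters() if p.dim() > 1]
    grads = torch.autograd.grad(loss, params)
    scores = torch.cat([(p * g).abs().flatten()
                        for p, g in zip(params, grads)])
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.mul_(((p * g).abs() > threshold).float())

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
saliency_prune_at_init(model, x, y, sparsity=0.9)
```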

Calibrating Deep Neural Networks using Focal Loss

2 code implementations · NeurIPS 2020 · Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H. S. Torr, Puneet K. Dokania

To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function.
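
For concreteness, focal loss down-weights confidently classified examples by the factor (1 − p_t)^γ; a minimal sketch, with a fixed illustrative γ rather than the paper's automatic selection scheme:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over the batch."""
    log_pt = F.log_softmax(logits, dim=-1).gather(
        1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = focal_loss(logits, targets)
loss.backward()
```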

The Intriguing Effects of Focal Loss on the Calibration of Deep Neural Networks

no code implementations · 25 Sep 2019 · Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, Puneet Dokania

When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks.
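
Temperature scaling, mentioned above, fits a single scalar T > 0 on held-out logits by minimizing the negative log-likelihood; since dividing logits by a constant never changes the arg max, accuracy is preserved. A minimal sketch (the optimizer, learning rate, and step count are illustrative):

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    """Learn one temperature T on a validation set; predictions are
    then softmax(logits / T), leaving accuracy unchanged."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()
```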

Stable Rank Normalization for Improved Generalization in Neural Networks and GANs

no code implementations · ICLR 2020 · Amartya Sanyal, Philip H. S. Torr, Puneet K. Dokania

Exciting new work on generalization bounds for neural networks (NN) by Neyshabur et al. and Bartlett et al. depends closely on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the rank operator).

Generalization Bounds · Image Generation · +1
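
For reference, the stable rank mentioned above has a standard closed form in terms of the singular values σ_i of a weight matrix W:

```latex
% Stable (numerical) rank: always at most the true rank
\mathrm{srank}(W) \;=\; \frac{\|W\|_F^2}{\|W\|_2^2} \;=\; \frac{\sum_i \sigma_i^2(W)}{\sigma_{\max}^2(W)} \;\le\; \mathrm{rank}(W)
```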

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

1 code implementation · ICML 2018 · Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade

The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data.

BIG-bench Machine Learning · Binarization

Robustness via Deep Low-Rank Representations

no code implementations · ICLR 2019 · Amartya Sanyal, Varun Kanade, Philip H. S. Torr, Puneet K. Dokania

To achieve low dimensionality of learned representations, we propose an easy-to-use, end-to-end trainable, low-rank regularizer (LR) that can be applied to any intermediate layer representation of a DNN.

Clustering · General Classification · +2
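
A minimal sketch of penalizing the rank of a batch of intermediate activations via the nuclear norm, a common convex surrogate for rank; this is an illustrative stand-in rather than the paper's exact regularizer, and the weight 1e-3 is an assumption.

```python
import torch

def low_rank_penalty(h):
    """Nuclear norm (sum of singular values) of a (batch, features)
    activation matrix; pushes representations toward low rank."""
    return torch.linalg.svdvals(h).sum()

# Usage: add to any intermediate layer's output during training.
h = torch.randn(128, 64, requires_grad=True)  # intermediate activations
task_loss = h.pow(2).mean()                    # placeholder task loss
loss = task_loss + 1e-3 * low_rank_penalty(h)
loss.backward()
```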

Optimizing Non-decomposable Measures with Deep Networks

no code implementations · 31 Jan 2018 · Amartya Sanyal, Pawan Kumar, Purushottam Kar, Sanjay Chawla, Fabrizio Sebastiani

We present a class of algorithms capable of directly training deep neural networks with respect to large families of structured, non-decomposable, task-specific performance measures such as the F-measure and the Kullback-Leibler divergence.
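
"Non-decomposable" here means the measure is not an average of per-example losses; the F-measure is the canonical example, since it is a non-linear function of corpus-level precision P and recall R:

```latex
% F_beta mixes precision and recall non-linearly, so it cannot be
% written as a sum of per-example terms
F_\beta \;=\; \frac{(1+\beta^2)\, P\, R}{\beta^2 P + R}
```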

Agent based simulation of the evolution of society as an alternate maximization problem

no code implementations · 5 Jul 2017 · Amartya Sanyal, Sanjana Garg, Asim Unmesh

Understanding the evolution of human society, as a complex adaptive system, is a task that has been approached from various angles.
