Search Results for author: Hengrui Jia

Found 8 papers, 3 papers with code

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD

no code implementations • 1 Jul 2023 Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot

Put together, our evaluation shows that this novel DP-SGD analysis allows us to formally show that DP-SGD leaks significantly less privacy for many datapoints (when trained on common benchmarks) than the current data-independent guarantee suggests.
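
For context, here is a minimal numpy sketch of the textbook DP-SGD step (per-example clipping plus Gaussian noise), not the paper's new per-datapoint analysis; the function and parameter names are hypothetical. The clip norm is the data-independent sensitivity bound the paper argues is often loose.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_mult=1.0,
                lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Clip each per-example gradient to norm at most clip_norm: this is the
    # worst-case, data-independent sensitivity assumption.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated as if every gradient saturated the clip norm.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage: most gradients are far below the clip norm, which is exactly the
# regime where a data-dependent analysis can give tighter guarantees.
grads = [np.array([0.05, -0.02]), np.array([3.0, 1.0]), np.array([-0.01, 0.04])]
print(dp_sgd_step(np.zeros(2), grads))
```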

Proof-of-Learning is Currently More Broken Than You Think

no code implementations • 6 Aug 2022 Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot

They empirically argued the benefit of this approach by showing that spoofing, i.e. computing a proof for a stolen model, is as expensive as obtaining the proof honestly by training the model.

Learning Theory

On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning

no code implementations • 22 Oct 2021 Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot

Machine unlearning, i.e. having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten.

Machine Unlearning
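
As a point of reference, here is a toy sketch of the strictest, trivially auditable unlearning definition: retraining from scratch without the forgotten points. The "model" here is just a mean, and all names are hypothetical.

```python
import numpy as np

def train_mean_model(data):
    # Stand-in "model": the mean of the training data.
    return np.mean(data, axis=0)

def unlearn(data, forget_indices):
    # Exact unlearning: retrain on the dataset with the requested points removed.
    keep = [x for i, x in enumerate(data) if i not in set(forget_indices)]
    return train_mean_model(np.array(keep))

data = np.array([[1.0], [2.0], [3.0], [4.0]])
print(train_mean_model(data))  # model trained on all data -> [2.5]
print(unlearn(data, [3]))      # model that has "forgotten" point 3 -> [2.0]
```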

A Zest of LIME: Towards Architecture-Independent Model Distances

no code implementations • ICLR 2022 Hengrui Jia, Hongyu Chen, Jonas Guan, Ali Shahin Shamsabadi, Nicolas Papernot

In this paper, we instead propose to compute distance between black-box models by comparing their Local Interpretable Model-Agnostic Explanations (LIME).

Machine Unlearning
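
A rough sketch of the idea under simplifying assumptions: Gaussian perturbations and plain least squares instead of the full LIME pipeline, with cosine distance over concatenated surrogate weights. All names are hypothetical.

```python
import numpy as np

def lime_weights(model, x, n_samples=500, sigma=0.1, seed=0):
    # Fit a local linear surrogate to a black-box model around input x.
    rng = np.random.default_rng(seed)  # same perturbations for every model
    X = x + sigma * rng.normal(size=(n_samples, x.shape[0]))
    y = np.array([model(z) for z in X])
    A = np.hstack([X, np.ones((n_samples, 1))])  # intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w[:-1]                                # drop the intercept

def zest_distance(model_a, model_b, reference_inputs):
    # Concatenate per-input surrogate weights and compare by cosine distance.
    wa = np.concatenate([lime_weights(model_a, x) for x in reference_inputs])
    wb = np.concatenate([lime_weights(model_b, x) for x in reference_inputs])
    return 1.0 - wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb))

# Toy black boxes: g is a rescaled copy of f, so their explanations align.
f = lambda z: float(z @ np.array([1.0, -2.0]))
g = lambda z: float(z @ np.array([2.0, -4.0]))
print(zest_distance(f, g, [np.zeros(2), np.ones(2)]))  # ~0: "close" models
```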

SoK: Machine Learning Governance

no code implementations • 20 Sep 2021 Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot

The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.

BIG-bench Machine Learning

Proof-of-Learning: Definitions and Practice

2 code implementations • 9 Mar 2021 Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot

In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
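
A heavily simplified sketch of the proof-of-learning idea (all names hypothetical; the real protocol logs checkpoints at intervals, verifies only the largest updates, and tolerates hardware noise): the prover records its training trajectory, and the verifier re-executes sampled steps, so forging a valid proof costs about as much as training honestly.

```python
import numpy as np

def grad(w, batch):
    # Gradient of mean squared error for a 1-d linear model y = w * x.
    x, y = batch
    return np.mean(2 * (w * x - y) * x)

def train_with_proof(w0, batches, lr=0.1):
    w, proof = w0, []
    for batch in batches:
        proof.append((w, batch))   # log weights and the batch used
        w = w - lr * grad(w, batch)
    proof.append((w, None))        # final checkpoint, no further update
    return w, proof

def verify_step(proof, t, lr=0.1, tol=1e-9):
    # Re-execute step t and check it reproduces the logged successor weights.
    w_t, batch = proof[t]
    w_next = proof[t + 1][0]
    return abs((w_t - lr * grad(w_t, batch)) - w_next) < tol

rng = np.random.default_rng(0)
batches = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(5)]
w_final, proof = train_with_proof(0.0, batches)
print(all(verify_step(proof, t) for t in range(len(proof) - 1)))  # True
```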

Entangled Watermarks as a Defense against Model Extraction

3 code implementations • 27 Feb 2020 Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot

Watermarks are input-output pairs that are not sampled from the task distribution and are only known to the defender.

Model extraction • Transfer Learning
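
A toy illustration of the watermark-based ownership test (hypothetical names; the paper's actual contribution, entangling watermarks with task data so they survive extraction, is not modeled here): the defender keeps secret out-of-distribution inputs with a chosen label and checks whether a suspect model reproduces them.

```python
import numpy as np

rng = np.random.default_rng(0)
watermarks = rng.normal(size=(16, 28 * 28))  # noise inputs, not task data
wm_label = 3                                 # label chosen by the defender

def ownership_score(model, inputs, label):
    # Fraction of watermark inputs the suspect model assigns the secret label.
    preds = np.array([model(x) for x in inputs])
    return float(np.mean(preds == label))

stolen_model = lambda x: 3                      # toy: memorized the watermarks
independent = lambda x: int(abs(x.sum())) % 10  # toy: unrelated 10-class model
print(ownership_score(stolen_model, watermarks, wm_label))  # 1.0
print(ownership_score(independent, watermarks, wm_label))   # near chance (~0.1)
```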

Machine Unlearning

2 code implementations • 9 Dec 2019 Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot

Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted.

Machine Unlearning • Transfer Learning
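
A heavily simplified sketch of the paper's SISA-style sharding (names hypothetical, sub-models reduced to shard means): one sub-model is trained per disjoint shard, so unlearning a point only requires retraining the shard that contained it rather than the whole model.

```python
import numpy as np

def train_shard(shard):
    return np.mean(shard)       # stand-in "model": the shard mean

def predict(models):
    return np.mean(models)      # aggregate sub-models by averaging

data = np.arange(12, dtype=float)
shards = list(np.split(data, 4))  # 4 disjoint shards of 3 points each
models = [train_shard(s) for s in shards]
print(predict(models))

# Forget data point 7.0: it lives in shard 2, so only that shard is retrained.
shards[2] = np.array([6.0, 8.0])
models[2] = train_shard(shards[2])
print(predict(models))
```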
