Reconstruction Attack

22 papers with code • 0 benchmarks • 0 datasets

Reconstruction attacks on facial manipulation models, such as face-swapping models, anonymization models, etc.

Most implemented papers

Reconstructing Training Data with Informed Adversaries

deepmind/informed_adversary_mnist_reconstruction 13 Jan 2022

Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
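
A minimal sketch of the threat model described above, in PyTorch: shadow models are trained on a fixed dataset plus one varying target point, and a reconstructor network learns to map the released parameters back to that point. The tiny linear model, MNIST-style dimensions, and the `release_model`/`reconstructor` names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def release_model(fixed_x, fixed_y, target_x, target_y, epochs=50):
    """Train a small classifier on the fixed set plus one target point and
    release its parameters, as the informed adversary would observe them."""
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.cat([fixed_x, target_x.unsqueeze(0)])
    y = torch.cat([fixed_y, target_y.unsqueeze(0)])
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return torch.cat([p.detach().flatten() for p in model.parameters()])

# The adversary trains a reconstructor that maps released parameters to the
# unknown target image, using many shadow models built from candidate points.
n_params = 784 * 10 + 10
reconstructor = nn.Sequential(nn.Linear(n_params, 512), nn.ReLU(),
                              nn.Linear(512, 784), nn.Sigmoid())
```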

A Review of Anonymization for Healthcare Data

iyempissy/anonymization-reconstruction-attack 13 Apr 2021

Mining health data can lead to faster medical decisions, improvement in the quality of treatment, disease prevention, reduced cost, and it drives innovative solutions within the healthcare sector.

Inference Attacks Against Graph Neural Networks

zhangzhk0819/gnn-embedding-leaks 6 Oct 2021

Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.
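
A hedged sketch of how such a subgraph-containment inference could be mounted, assuming the attacker already has graph and subgraph embeddings and trains a binary classifier on (graph, subgraph, label) triples; the `attack_clf` architecture and embedding dimension below are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

emb_dim = 64
# Binary attack classifier, trained offline on (graph, subgraph, contained?) triples.
attack_clf = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(), nn.Linear(128, 1))

graph_emb = torch.randn(1, emb_dim)      # embedding released for the target graph
subgraph_emb = torch.randn(1, emb_dim)   # embedding of the subgraph of interest
logit = attack_clf(torch.cat([graph_emb, subgraph_emb], dim=1))
prob_contained = torch.sigmoid(logit)    # high value: subgraph likely in the target graph
```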

Towards General Deep Leakage in Federated Learning

hangyuzhu/leakage-attack-in-federated-learning 18 Oct 2021

We find that image restoration fails even if there is only one incorrectly inferred label in the batch; we also find that when batch images have the same label, the corresponding image is restored as a fusion of that class of images.
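
The sketch below illustrates the underlying gradient-matching idea (in the spirit of deep leakage from gradients): the attacker optimizes dummy inputs and soft labels so that their gradient matches the one shared by the client. The model, batch size, and optimizer settings are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x_true = torch.rand(4, 1, 28, 28)              # a client's private batch
y_true = torch.tensor([3, 3, 1, 7])

# The gradient the client would share with the server.
true_loss = nn.functional.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(true_loss, model.parameters())]

# The attacker optimizes dummy inputs and soft labels to match that gradient;
# as the paper observes, one wrongly inferred label can break restoration.
x_dummy = torch.randn(4, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(4, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.1)

def closure():
    opt.zero_grad()
    log_probs = torch.log_softmax(model(x_dummy), dim=-1)
    dummy_loss = -(y_dummy.softmax(dim=-1) * log_probs).sum(dim=1).mean()
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(50):
    opt.step(closure)
```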

When the Curious Abandon Honesty: Federated Learning Is Not Private

JonasGeiping/breaching 6 Dec 2021

Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
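
For context, a minimal federated-averaging sketch of what that central party actually observes: per-client parameter updates computed on data that never leaves the device. The `client_update` helper and all sizes are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

global_model = nn.Linear(10, 2)            # model coordinated by the server

def client_update(global_state, x, y, lr=0.1):
    """One local step on private data; only the parameter delta leaves the device."""
    local = nn.Linear(10, 2)
    local.load_state_dict(global_state)
    nn.functional.cross_entropy(local(x), y).backward()
    with torch.no_grad():
        for p in local.parameters():
            p -= lr * p.grad
    return {k: local.state_dict()[k] - global_state[k] for k in global_state}

# The "curious" server aggregates the updates it receives from clients.
updates = [client_update(global_model.state_dict(), torch.randn(8, 10),
                         torch.randint(0, 2, (8,))) for _ in range(3)]
with torch.no_grad():
    for name, p in global_model.named_parameters():
        p += torch.stack([u[name] for u in updates]).mean(dim=0)
```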

How Private Is Your RL Policy? An Inverse RL Based Analysis Framework

magnetar-iiith/pril 10 Dec 2021

Reinforcement Learning (RL) enables agents to learn how to perform various tasks from scratch.

TabLeak: Tabular Data Leakage in Federated Learning

eth-sri/tableak 4 Oct 2022

A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction, since, unlike for image and text data, direct human inspection is not possible.
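
A hedged sketch of challenge (i): relaxing a one-hot categorical column with a softmax so the mixed discrete-continuous row can be optimized with gradients and discretized at the end. The column sizes and variable names below are assumptions, not TabLeak's code.

```python
import torch

n_categories = 4                 # a one-hot encoded categorical column (assumed size)
cat_logits = torch.zeros(n_categories, requires_grad=True)   # relaxed categorical part
num_value = torch.zeros(1, requires_grad=True)               # continuous column

def candidate_row():
    # The softmax keeps the categorical part differentiable while a
    # gradient-matching reconstruction loss is minimized over the whole row.
    return torch.cat([torch.softmax(cat_logits, dim=0), num_value])

opt = torch.optim.Adam([cat_logits, num_value], lr=0.1)
# ... minimize the reconstruction loss over candidate_row(), then discretize:
one_hot = torch.nn.functional.one_hot(cat_logits.argmax(), n_categories)
```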

Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning

isdkfj/binary-attack 13 Oct 2022

To address this problem, we develop a novel feature protection scheme against the reconstruction attack that effectively misleads the search to some pre-specified random values.

Confidence-Ranked Reconstruction of Census Microdata from Published Statistics

terranceliu/rap-rank-reconstruction 6 Nov 2022

Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
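
As a toy illustration of the confidence-ranking idea only (not the authors' RAP-based method), candidate records can be scored by how strongly the released aggregate statistics support them and then ranked; all attribute names and frequencies below are made up.

```python
import itertools

# Toy released statistics Q(D): fraction of records satisfying each 2-way
# marginal query over three binary attributes (values are illustrative).
released = {
    ("sex", 1, "smoker", 1): 0.40,
    ("sex", 0, "smoker", 1): 0.05,
    ("sex", 1, "disease", 1): 0.35,
    ("sex", 0, "disease", 1): 0.10,
}

def confidence(row):
    # A candidate scores the released frequency of every query it satisfies,
    # so rows strongly supported by the aggregates are ranked first.
    return sum(freq for (a1, v1, a2, v2), freq in released.items()
               if row[a1] == v1 and row[a2] == v2)

candidates = [dict(zip(("sex", "smoker", "disease"), vals))
              for vals in itertools.product([0, 1], repeat=3)]
ranked = sorted(candidates, key=confidence, reverse=True)
# Highest-confidence guesses for rows of the private dataset come first.
```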

Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing

mmalekzadeh/vicious-classifiers 8 Dec 2022

Privacy-preserving inference in edge computing paradigms encourages the users of machine-learning services to locally run a model on their private input and only share the model's outputs for a target task with the server.
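
A hedged sketch of the resulting risk: a server that also holds a decoder trained on auxiliary data can map the shared outputs back toward the private inputs. The decoder architecture, auxiliary data, and training step below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

target_model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # runs on the device
decoder = nn.Sequential(nn.Linear(10, 256), nn.ReLU(), nn.Linear(256, 784))

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
x_aux = torch.rand(64, 1, 28, 28)         # auxiliary data the server can collect itself
with torch.no_grad():
    shared_outputs = target_model(x_aux)  # what a user would send for the target task

recon = decoder(shared_outputs)           # server's guess at the private input
loss = nn.functional.mse_loss(recon, x_aux.flatten(1))
loss.backward()
opt.step()
```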