Reconstruction Attack
22 papers with code • 0 benchmarks • 0 datasets
Facial reconstruction attacks against facial manipulation models, such as face swapping models and anonymization models.
Most implemented papers
Reconstructing Training Data with Informed Adversaries
Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
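A minimal sketch of the informed-adversary threat model: the attacker knows every training point except one and probes the released parameters by retraining on candidate completions. The scikit-learn model, dataset sizes, and brute-force candidate search below are illustrative assumptions, not the paper's protocol (which trains a reconstructor network on shadow models).

```python
# Informed-adversary reconstruction toy: pick the candidate point whose
# retrained model best matches the released parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
known = rng.normal(size=(50, 5))           # the n-1 points the adversary knows
known_y = rng.integers(0, 2, size=50)
target = rng.normal(size=(1, 5))           # the single unknown training point
target_y = np.array([1])

def fit(extra_x, extra_y):
    X = np.vstack([known, extra_x])
    y = np.concatenate([known_y, extra_y])
    return LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()

released = fit(target, target_y)           # parameters the attacker observes

# Reconstruction by search over candidate points (hypothetical strategy).
candidates = rng.normal(size=(200, 5))
dists = [np.linalg.norm(fit(c[None], target_y) - released) for c in candidates]
best = candidates[int(np.argmin(dists))]
print("distance of best guess to true target:", np.linalg.norm(best - target))
```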
A Review of Anonymization for Healthcare Data
Mining health data can lead to faster medical decisions, improved treatment quality, disease prevention, and reduced cost, and it drives innovative solutions within the healthcare sector.
Inference Attacks Against Graph Neural Networks
Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.
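A hedged toy of this kind of subgraph-containment inference: train a binary classifier on (graph embedding, subgraph embedding) pairs labeled contained / not contained. The random stand-in embeddings and the MLP are assumptions for illustration; a real attack would use embeddings produced by the target GNN.

```python
# Subgraph-containment inference as binary classification over embedding pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
dim = 32
graph_emb = rng.normal(size=(500, dim))
# Positive pairs: subgraph embedding correlated with its parent graph.
pos_sub = graph_emb + 0.3 * rng.normal(size=(500, dim))
# Negative pairs: subgraph embedding from an unrelated graph.
neg_sub = rng.normal(size=(500, dim))

X = np.vstack([np.hstack([graph_emb, pos_sub]),
               np.hstack([graph_emb, neg_sub])])
y = np.concatenate([np.ones(500), np.zeros(500)])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print("train accuracy:", clf.score(X, y))
```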
Towards General Deep Leakage in Federated Learning
We find that image restoration fails if even a single label in the batch is inferred incorrectly; we also find that when the images in a batch share the same label, the corresponding image is restored as a fusion of images of that class.
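A toy gradient-inversion sketch in the spirit of deep-leakage attacks, assuming the label has already been inferred correctly (which, as the finding above suggests, is critical): a dummy input is optimized until its gradients match the shared ones. The linear model, sizes, and optimizer settings are invented for illustration.

```python
# Gradient inversion: recover an input by matching gradients.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()

x_true = torch.randn(1, 10)
y_true = torch.tensor([1])                 # assumed correctly inferred label
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

x_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                model.parameters(), create_graph=True)
    # Match the dummy gradients to the observed gradients.
    gap = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    gap.backward()
    opt.step()

print("reconstruction error:", (x_dummy - x_true).norm().item())
```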
When the Curious Abandon Honesty: Federated Learning Is Not Private
Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.
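For context, a minimal sketch of that federated protocol, assuming a FedAvg-style round with two clients and an invented linear model: raw data stays on the device, and only parameter updates reach the coordinating server.

```python
# One FedAvg-style round: local training, then server-side averaging.
import torch

global_model = torch.nn.Linear(4, 1)

def client_update(model, x, y, lr=0.1):
    local = torch.nn.Linear(4, 1)
    local.load_state_dict(model.state_dict())
    loss = torch.nn.functional.mse_loss(local(x), y)
    loss.backward()
    with torch.no_grad():
        for p in local.parameters():
            p -= lr * p.grad
    # Only the updated parameters leave the device -- never the raw data.
    return {k: v.detach() for k, v in local.state_dict().items()}

updates = [client_update(global_model, torch.randn(8, 4), torch.randn(8, 1))
           for _ in range(2)]
avg = {k: torch.stack([u[k] for u in updates]).mean(0) for k in updates[0]}
global_model.load_state_dict(avg)
```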
How Private Is Your RL Policy? An Inverse RL Based Analysis Framework
Reinforcement Learning (RL) enables agents to learn how to perform various tasks from scratch.
TabLeak: Tabular Data Leakage in Federated Learning
A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction, since, unlike image and text data, tabular data does not allow direct human inspection.
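A loose sketch of one way to approach challenge (i), assuming a softmax relaxation of categorical columns (a standard trick, and not necessarily the paper's exact method) so the mixed discrete-continuous reconstruction becomes differentiable end to end.

```python
# Tabular gradient inversion with relaxed categorical features.
import torch

torch.manual_seed(0)
num_cats, num_cont = 3, 2
model = torch.nn.Linear(num_cats + num_cont, 1)

# Victim row: one-hot categorical part (category 1) plus continuous part.
x_cat = torch.nn.functional.one_hot(torch.tensor(1), num_cats).float()
x_true = torch.cat([x_cat, torch.tensor([0.5, -1.2])]).unsqueeze(0)
y = torch.tensor([[1.0]])
true_grads = torch.autograd.grad(
    torch.nn.functional.mse_loss(model(x_true), y), model.parameters())

logits = torch.randn(1, num_cats, requires_grad=True)   # relaxed categorical
cont = torch.randn(1, num_cont, requires_grad=True)     # continuous features
opt = torch.optim.Adam([logits, cont], lr=0.1)
for _ in range(400):
    opt.zero_grad()
    x = torch.cat([torch.softmax(logits, dim=1), cont], dim=1)
    grads = torch.autograd.grad(
        torch.nn.functional.mse_loss(model(x), y),
        model.parameters(), create_graph=True)
    sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads)).backward()
    opt.step()

print("recovered category:", logits.argmax().item())   # ideally 1
print("recovered continuous:", cont.detach())
```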
Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning
To address this problem, we develop a novel feature protection scheme against the reconstruction attack that effectively misleads the search to some pre-specified random values.
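The mechanics of the scheme are specific to the paper, but the intuition of misleading the attacker's search to decoys can be shown in a deliberately tiny numpy toy, where a secret shift makes the attacker's inversion land exactly on pre-specified random values. This is not the paper's scheme, only the intuition.

```python
# Decoy intuition: the protected embedding inverts to random decoy values.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))                 # passive party's layer (known to attacker)
x_true = rng.normal(size=4)                 # real private features
decoy = rng.uniform(-1, 1, size=4)          # pre-specified random values

# Defense: send the embedding of a secretly shifted input, so inverting
# the layer yields the decoy instead of the real features.
z = W @ (x_true - (x_true - decoy))         # == W @ decoy
x_hat = np.linalg.solve(W, z)               # attacker's reconstruction
print(np.allclose(x_hat, decoy))            # True: search misled to the decoy
```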
Confidence-Ranked Reconstruction of Census Microdata from Published Statistics
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
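A toy version of the idea, with an invented binary microdata table and counting queries: candidate rows are scored by their consistency with the published aggregate answers $Q(D)$ and ranked by that confidence score.

```python
# Confidence-ranked reconstruction from aggregate statistics (toy).
import numpy as np

rng = np.random.default_rng(0)
D = rng.integers(0, 2, size=(100, 6))       # private binary microdata
queries = rng.integers(0, 2, size=(40, 6))  # attribute-counting queries
Q_D = (D @ queries.T).mean(axis=0)          # published aggregate answers Q(D)

candidates = rng.integers(0, 2, size=(1000, 6))
# Confidence score: agreement between a candidate's query profile and Q(D).
scores = -np.abs(candidates @ queries.T - Q_D).sum(axis=1)
ranked = candidates[np.argsort(-scores)]
in_D = [(row == D).all(axis=1).any() for row in ranked[:20]]
print("top-20 guesses actually present in D:", sum(in_D))
```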
Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing
Privacy-preserving inference in edge computing paradigms encourages the users of machine-learning services to locally run a model on their private input and share only the model's outputs for a target task with the server.
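One way to picture the risk, as a hedged sketch: a server that only ever sees the model's outputs trains a decoder from outputs back to inputs on surrogate data. Architectures, dimensions, and data here are invented for illustration.

```python
# Inference-time reconstruction: decode private inputs from shared outputs.
import torch

torch.manual_seed(0)
encoder = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU(),
                              torch.nn.Linear(8, 4))    # the deployed model
decoder = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                              torch.nn.Linear(8, 16))   # the server's attack
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)

for _ in range(500):
    x = torch.randn(64, 16)                 # surrogate data on the server side
    with torch.no_grad():
        z = encoder(x)                      # what users would share
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(decoder(z), x)
    loss.backward()
    opt.step()

x_user = torch.randn(1, 16)                 # a user's private input
x_rec = decoder(encoder(x_user))            # reconstructed from outputs alone
print("reconstruction mse:",
      torch.nn.functional.mse_loss(x_rec, x_user).item())
```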