Search Results for author: Vincent Bindschaedler

Found 9 papers, 2 papers with code

SoK: Memorization in General-Purpose Large Language Models

no code implementations • 24 Oct 2023 • Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David Evans, Shruti Tople, Robert West

A major part of this success is due to their huge training datasets and the unprecedented number of model parameters, which allow them to memorize large amounts of information contained in the training data.

Memorization • Question Answering

On the Importance of Architecture and Feature Selection in Differentially Private Machine Learning

no code implementations • 13 May 2022 • Wenxuan Bao, Luke A. Bauer, Vincent Bindschaedler

The use of differentially private learning algorithms in a "drop-in" fashion -- without accounting for the impact of differential privacy (DP) noise when choosing what feature engineering operations to use, what features to select, or what neural network architecture to use -- yields overly complex and poorly performing models.

BIG-bench Machine Learning • Feature Engineering • +1
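A minimal sketch of why DP is not "drop-in": every extra feature or parameter changes per-example gradient norms, while the noise added each step is fixed by the clipping norm, so architecture and feature choices directly affect the signal-to-noise ratio. The clip-and-noise step in the style of DP-SGD looks roughly like this (function names and constants are hypothetical, stdlib only):

```python
import math
import random

random.seed(0)

def clip(grad, max_norm):
    """Rescale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_noisy_mean(per_example_grads, max_norm=1.0, noise_mult=1.1):
    """Clip each per-example gradient, average, then add Gaussian noise
    calibrated to the clipping norm (the Gaussian mechanism, as in DP-SGD)."""
    clipped = [clip(g, max_norm) for g in per_example_grads]
    n = len(clipped)
    mean = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_mult * max_norm / n
    return [m + random.gauss(0.0, sigma) for m in mean]

# Toy usage: a large gradient gets clipped, a small one passes through,
# and the averaged update is perturbed before it touches the model.
grads = [[3.0, 4.0], [0.3, -0.4]]
noisy_update = dp_noisy_mean(grads)
```

A wider model tends to produce more clipped coordinates sharing the same noise budget, which is one intuition for the paper's point that architecture must be chosen with DP in mind.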

Covert Message Passing over Public Internet Platforms Using Model-Based Format-Transforming Encryption

no code implementations • 13 Oct 2021 • Luke A. Bauer, James K. Howes IV, Sam A. Markelon, Vincent Bindschaedler, Thomas Shrimpton

We introduce a new type of format-transforming encryption where the format of ciphertexts is implicitly encoded within a machine-learned generative model.
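The idea can be sketched with a toy bigram model: ciphertext bits select which ranked continuation the "model" emits, so the covertext's format is implicitly whatever the model generates. Everything below (the word lists, two bits per word, the fallback list) is invented for illustration; the paper uses learned generative models with a full encryption layer underneath:

```python
# Toy "generative model": for each previous word, a ranked list of four
# plausible next words. Hypothetical stand-in for a learned model.
MODEL = {
    "the": ["cat", "dog", "sun", "rain"],
    "cat": ["sat", "ran", "slept", "hid"],
    "sun": ["rose", "set", "shone", "burned"],
    "rain": ["fell", "stopped", "poured", "eased"],
}

def candidates(prev):
    # Generic fallback list so any chain of words can continue.
    return MODEL.get(prev, ["and", "then", "so", "but"])

def encode(bits, seed="the"):
    """Embed ciphertext bits (two per word) in the choice of next word."""
    assert len(bits) % 2 == 0
    prev, out = seed, [seed]
    for i in range(0, len(bits), 2):
        idx = 2 * bits[i] + bits[i + 1]   # two bits -> candidate index 0..3
        word = candidates(prev)[idx]
        out.append(word)
        prev = word
    return " ".join(out)

def decode(text):
    """Recover bits from each word's rank among its predecessor's candidates."""
    words = text.split()
    bits = []
    for prev, word in zip(words, words[1:]):
        idx = candidates(prev).index(word)
        bits.extend([idx >> 1, idx & 1])
    return bits
```

The covertext is ordinary-looking model output, and the receiver, holding the same model, inverts the ranking to recover the ciphertext bits.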

Understanding Membership Inferences on Well-Generalized Learning Models

1 code implementation • 13 Feb 2018 • Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiao-Feng Wang, Haixu Tang, Carl A. Gunter, Kai Chen

Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model.

BIG-bench Machine Learning • Inference Attack • +1
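The classic baseline behind most MIAs is a loss threshold: guess "member" when the model's loss on a record is low. A sketch with synthetic numbers (the loss distributions and 0.4 threshold are invented for illustration; the paper's point is that attacks can succeed even on well-generalized models, where this member/non-member gap is much smaller):

```python
import random

random.seed(0)

# Synthetic per-record losses: a model typically fits records it was
# trained on (members) better than fresh records (non-members).
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmember_losses = [random.gauss(0.6, 0.2) for _ in range(1000)]

def infer_membership(loss, threshold=0.4):
    """Predict 'member' when the target model's loss on the record is low."""
    return loss < threshold

true_pos = sum(infer_membership(l) for l in member_losses)
true_neg = sum(not infer_membership(l) for l in nonmember_losses)
attack_accuracy = (true_pos + true_neg) / 2000
```

On a well-generalized model the two loss distributions overlap heavily and this baseline approaches coin-flipping, which is the regime the paper targets with more refined attacks.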

Plausible Deniability for Privacy-Preserving Data Synthesis

no code implementations • 26 Aug 2017 • Vincent Bindschaedler, Reza Shokri, Carl A. Gunter

We demonstrate the efficiency of this generative technique on a large dataset; it preserves the utility of the original data with respect to various statistical analyses and machine learning measures.

De-identification • Privacy Preserving
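The release criterion can be sketched roughly as: a synthetic record is released only if sufficiently many input records could plausibly have generated it, so no single seed is identifiable. The generative mechanism and all constants below are invented for illustration; only the k-plausibility test mirrors the paper's idea:

```python
import math

def gen_prob(x, y):
    """Toy generative mechanism: the chance that seed record x produces
    candidate y, proportional to exp(-|y - x|). Hypothetical stand-in."""
    return math.exp(-abs(y - x))

def plausibly_deniable(y, dataset, k=3, gamma=3.0):
    """Release y only if at least k records in the dataset could have
    generated it with probability within a factor gamma of the best seed."""
    best = max(gen_prob(x, y) for x in dataset)
    plausible = sum(1 for x in dataset if gen_prob(x, y) >= best / gamma)
    return plausible >= k
```

A candidate close to several input records passes the test, while an outlier that only its own seed could have produced is suppressed, which is what decouples the released record from any individual input.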
