Search Results for author: Mehmet Emre Gursoy

Found 11 papers, 5 papers with code

Utility-Optimized Synthesis of Differentially Private Location Traces

no code implementations · 14 Sep 2020 · Mehmet Emre Gursoy, Vivekanand Rajasekar, Ling Liu

Given a real trace dataset D, a differential privacy parameter epsilon controlling the strength of privacy protection, and a utility/error metric Err of interest, OptaTrace uses Bayesian optimization to optimize differentially private location trace synthesis (DPLTS) so that the output error (measured by Err) is minimized while epsilon-differential privacy is satisfied.

Bayesian Optimization · Benchmarking
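The OptaTrace abstract above describes an outer loop that searches over synthesis configurations to minimize an error metric at a fixed privacy budget. As a rough illustration of that loop shape only, here is a toy sketch; it substitutes plain random search for the paper's Bayesian optimization, and all names (`synthesize`, `optimize`) and the quadratic error surface are invented for illustration:

```python
import random

def synthesize(noise_scale):
    """Toy stand-in for one DPLTS run: returns the Err of the synthetic
    traces produced under this configuration. The quadratic surface
    (best near noise_scale = 2.0) is fabricated for the demo."""
    return (noise_scale - 2.0) ** 2 + 0.1

def optimize(n_trials=50, seed=0):
    """Find the configuration minimizing Err at a fixed epsilon.
    Random search stands in for Bayesian optimization here."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        cfg = rng.uniform(0.1, 5.0)   # candidate noise-scale setting
        err = synthesize(cfg)         # evaluate the Err metric
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err
```

In true Bayesian optimization a surrogate model would propose each candidate instead of uniform sampling; the outer minimize-Err-at-fixed-epsilon structure is the part this sketch mirrors.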

Data Poisoning Attacks Against Federated Learning Systems

2 code implementations · 16 Jul 2020 · Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu

Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server.

Data Poisoning · Federated Learning
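The abstract above describes the FL flow that data poisoning targets: clients keep raw data local and share only model updates, which the server aggregates. A minimal sketch of how one corrupted client can skew that aggregate (all names hypothetical; plain averaging stands in for FedAvg):

```python
# Each client takes one local gradient step on its private data and
# sends only the resulting parameters; the server never sees raw data.
def local_update(weights, grad, lr=0.1):
    return [w - lr * g for w, g in zip(weights, grad)]

# Server-side aggregation: coordinate-wise mean of the client updates.
def federated_average(updates):
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_w = [0.0, 0.0]
honest = [local_update(global_w, g) for g in ([1.0, -1.0], [0.8, -1.2])]
# A poisoned client (e.g. via label flipping) reports an inverted,
# scaled gradient; values here are fabricated for illustration.
poisoned = local_update(global_w, [-10.0, 10.0])
global_w = federated_average(honest + [poisoned])
```

With these toy numbers the single poisoned update flips the sign of the aggregate step, even though two of three clients are honest.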

Understanding Object Detection Through An Adversarial Lens

1 code implementation · 11 Jul 2020 · Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu

We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.

Adversarial Robustness · Autonomous Vehicles · +3

LDP-Fed: Federated Learning with Local Differential Privacy

no code implementations · 5 Jun 2020 · Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, Wenqi Wei

However, in federated learning, model parameter updates are collected iteratively from each participant and consist of high-dimensional, continuous values with high precision (tens of digits after the decimal point), making existing LDP protocols inapplicable.

Federated Learning
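For contrast with the high-dimensional continuous updates discussed above, here is randomized response, the canonical LDP primitive for a single binary value; this illustrates the kind of input classic LDP protocols assume, not the LDP-Fed protocol itself (counts and epsilon below are fabricated for the demo):

```python
import math, random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; satisfies epsilon-LDP for one binary value."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if rng.random() < p else 1 - bit

rng = random.Random(0)
eps = 1.0
true_bits = [1] * 700 + [0] * 300          # hypothetical population
reported = [randomized_response(b, eps, rng) for b in true_bits]

# Debias the noisy count to get an unbiased estimate of the true count.
p = math.exp(eps) / (math.exp(eps) + 1)
est = (sum(reported) - (1 - p) * len(reported)) / (2 * p - 1)
```

The estimator recovers the population count of 1s (here about 700) despite every individual report being randomized, but the mechanism only handles one discrete value per user, which is why gradient vectors need different treatment.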

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

2 code implementations · 22 Apr 2020 · Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu

FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to share only local training parameter updates with the federated server.

Federated Learning

TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time Object Detection Systems

2 code implementations · 9 Apr 2020 · Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu

The rapid growth of real-time, large-scale data capture has pushed deep learning and data-analytic computing to edge systems.

Autonomous Driving · Object · +4

Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability

no code implementations · 21 Nov 2019 · Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Wenqi Wei, Lei Yu

Second, through MPLens, we highlight how the vulnerability of pre-trained models under membership inference attack is not uniform across all classes, particularly when the training data itself is skewed.

Inference Attack · Membership Inference Attack

Secure and Utility-Aware Data Collection with Condensed Local Differential Privacy

no code implementations · 15 May 2019 · Mehmet Emre Gursoy, Acar Tamersoy, Stacey Truex, Wenqi Wei, Ling Liu

In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy.

Cryptography and Security · Databases

Differentially Private Model Publishing for Deep Learning

no code implementations · 3 Apr 2019 · Lei Yu, Ling Liu, Calton Pu, Mehmet Emre Gursoy, Stacey Truex

However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage.

Adversarial Examples in Deep Learning: Characterization and Divergence

no code implementations · 29 Jun 2018 · Wenqi Wei, Ling Liu, Margaret Loper, Stacey Truex, Lei Yu, Mehmet Emre Gursoy, Yanzhao Wu

The burgeoning success of deep learning has raised security and privacy concerns, as more and more tasks involve sensitive data.

Adversarial Attack

Towards Demystifying Membership Inference Attacks

1 code implementation · 28 Jun 2018 · Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Lei Yu, Wenqi Wei

Our empirical results additionally show that (1) using the type of target model under attack within the attack model may not increase attack effectiveness and (2) collaborative learning in federated systems exposes vulnerabilities to membership inference risks when the adversary is a participant in the federation.

Cryptography and Security
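As a toy illustration of the membership-inference setting only: the paper trains attack models on shadow-model outputs, whereas this sketch uses a crude confidence threshold, exploiting the tendency of models to be more confident on their own training points (all names and values here are fabricated):

```python
# Simplistic membership test: flag a point as a training-set member
# if the target model's confidence on it exceeds a fixed threshold.
# Real attacks (as in the paper) learn this decision from shadow models.
def infer_member(confidence, threshold=0.9):
    return confidence >= threshold

train_confs = [0.99, 0.97, 0.95]   # hypothetical confidences on members
test_confs = [0.70, 0.85, 0.60]    # hypothetical confidences on non-members
preds = [infer_member(c) for c in train_confs + test_confs]
```

On these fabricated confidences the threshold separates members from non-members perfectly; in practice the gap is noisier, which is why learned attack models outperform fixed thresholds.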
