Search Results for author: Stacey Truex

Found 14 papers, 6 papers with code

Data Poisoning Attacks Against Federated Learning Systems

2 code implementations • 16 Jul 2020 • Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu

Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server.

Data Poisoning • Federated Learning
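This paper studies targeted label-flipping attacks, in which malicious participants relabel examples of one class as another before local training so that their shared updates steer the global model. A minimal sketch of the setup (the helper names and toy numbers below are illustrative, not the authors' code):

```python
import numpy as np

def flip_labels(labels, src_class, dst_class):
    """Label-flipping poisoning: relabel every source-class example
    as the target class before local training (hypothetical helper)."""
    poisoned = labels.copy()
    poisoned[labels == src_class] = dst_class
    return poisoned

def fedavg(updates):
    """Plain federated averaging of the clients' update vectors."""
    return np.mean(np.stack(updates), axis=0)

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=100)
poisoned_labels = flip_labels(labels, src_class=5, dst_class=3)

# Toy aggregation round: 8 benign clients plus 2 poisoned clients whose
# updates have drifted because they trained on flipped labels.
benign = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
malicious = [rng.normal(1.0, 0.1, size=10) for _ in range(2)]
global_update = fedavg(benign + malicious)
```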

Understanding Object Detection Through An Adversarial Lens

1 code implementation • 11 Jul 2020 • Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu

We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.

Adversarial Robustness • Autonomous Vehicles • +3

LDP-Fed: Federated Learning with Local Differential Privacy

no code implementations • 5 Jun 2020 • Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, Wenqi Wei

However, in federated learning, model parameter updates are collected iteratively from each participant and consist of high-dimensional, continuous values with high precision (tens of digits after the decimal point), making existing LDP protocols inapplicable.

Federated Learning
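LDP-Fed addresses this by selectively sharing perturbed parameter values under a local privacy budget. The sketch below shows only a generic one-bit ε-LDP randomizer for a bounded value (a standard randomized-response construction, not LDP-Fed's actual protocol), to illustrate how a continuous update coordinate can be reported privately yet unbiasedly:

```python
import numpy as np

def ldp_perturb(x, eps, rng):
    """One-bit eps-LDP randomizer for values clipped to [-1, 1]:
    encode each value as a biased sign bit, apply randomized
    response, then debias so the report is unbiased in expectation."""
    x = np.clip(x, -1.0, 1.0)
    b = np.where(rng.random(x.shape) < (1 + x) / 2, 1.0, -1.0)   # E[b] = x
    keep = rng.random(x.shape) < np.e**eps / (np.e**eps + 1)     # randomized response
    b = np.where(keep, b, -b)
    return b * (np.e**eps + 1) / (np.e**eps - 1)                 # debias

rng = np.random.default_rng(0)
update = rng.normal(0.0, 0.3, size=100_000)
noisy = ldp_perturb(update, eps=1.0, rng=rng)
print(update.mean(), noisy.mean())  # means agree in expectation
```

Note the variance of each individual report is large; utility comes only from aggregating many clients, which is exactly why high-dimensional per-parameter reporting is delicate.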

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

2 code implementations • 22 Apr 2020 • Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu

FL offers default client privacy by allowing clients to keep their sensitive data on local devices and share only local training parameter updates with the federated server.

Federated Learning
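Gradient leakage attacks invert those shared updates to reconstruct training data. The paper evaluates iterative, optimization-based attacks; the sketch below instead shows the simplest analytic case, where the gradient of a single dense layer with a bias term reveals its input exactly (toy numbers, not the framework's code):

```python
import numpy as np

# For a dense layer with pre-activation z = w @ x + b and any loss L,
# dL/dw = (dL/dz) * x and dL/db = dL/dz, so x = (dL/dw) / (dL/db).
rng = np.random.default_rng(0)
x_true = rng.normal(size=8)          # a client's private example
w, b, y = rng.normal(size=8), 0.1, 1.0

err = w @ x_true + b - y             # squared-loss residual, i.e. dL/dz
grad_w = err * x_true                # what the client would share
grad_b = err

x_reconstructed = grad_w / grad_b    # the server recovers the input
print(np.allclose(x_reconstructed, x_true))  # True
```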

TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time Object Detection Systems

2 code implementations • 9 Apr 2020 • Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu

The rapid growth of real-time data capture has pushed deep learning and data analytics to edge systems.

Autonomous Driving • Object • +4

TiFL: A Tier-based Federated Learning System

no code implementations • 25 Jan 2020 • Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng

To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.

Federated Learning
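A rough sketch of the tiering idea, assuming tiers are formed by sorting clients on measured per-round training latency and that each round draws all its participants from a single tier (the paper additionally adapts tier selection using observed accuracy; the names below are illustrative):

```python
import random

def assign_tiers(latencies, num_tiers=3):
    """Group clients into tiers by measured per-round training time
    (a simplified sketch of TiFL's profiling-based tiering)."""
    ranked = sorted(latencies, key=latencies.get)  # fastest first
    size = -(-len(ranked) // num_tiers)            # ceil division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

latencies = {f"client{i}": random.uniform(1, 20) for i in range(12)}
tiers = assign_tiers(latencies)

# Each round, sample participants from one tier, so round duration is
# bounded by that tier's slowest member rather than the global straggler.
tier = random.choice(tiers)
selected = random.sample(tier, k=min(3, len(tier)))
```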

Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability

no code implementations • 21 Nov 2019 • Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Wenqi Wei, Lei Yu

Second, through MPLens, we highlight how the vulnerability of pre-trained models under membership inference attack is not uniform across all classes, particularly when the training data itself is skewed.

Inference Attack • Membership Inference Attack

Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness

no code implementations • 29 Aug 2019 • Ling Liu, Wenqi Wei, Ka-Ho Chow, Margaret Loper, Emre Gursoy, Stacey Truex, Yanzhao Wu

In this paper, we first give an overview of the concept of ensemble diversity and examine the three types of ensemble diversity in the context of DNN classifiers.

Ensemble Learning

Secure and Utility-Aware Data Collection with Condensed Local Differential Privacy

no code implementations • 15 May 2019 • Mehmet Emre Gursoy, Acar Tamersoy, Stacey Truex, Wenqi Wei, Ling Liu

In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy.

Cryptography and Security • Databases
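CLDP replaces LDP's uniform indistinguishability with a metric-based guarantee: nearby values stay harder to distinguish than distant ones. A minimal sketch of an exponential-mechanism-style randomizer in that spirit (a simplification for an ordinal domain; the paper develops several protocols and the privacy parameterization in detail):

```python
import numpy as np

def cldp_ordinal(value, domain, alpha, rng):
    """Report y with probability proportional to exp(-alpha * d(value, y) / 2),
    where d is the distance between ordinal items, so close values are
    reported far more often than distant ones."""
    d = np.abs(domain - value)
    p = np.exp(-alpha * d / 2.0)
    return rng.choice(domain, p=p / p.sum())

rng = np.random.default_rng(0)
domain = np.arange(100)
reports = [cldp_ordinal(42, domain, alpha=0.4, rng=rng) for _ in range(1000)]
print(np.mean(reports))  # concentrated near the true value
```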

Differentially Private Model Publishing for Deep Learning

no code implementations • 3 Apr 2019 • Lei Yu, Ling Liu, Calton Pu, Mehmet Emre Gursoy, Stacey Truex

However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage.

Adversarial Examples in Deep Learning: Characterization and Divergence

no code implementations • 29 Jun 2018 • Wenqi Wei, Ling Liu, Margaret Loper, Stacey Truex, Lei Yu, Mehmet Emre Gursoy, Yanzhao Wu

The burgeoning success of deep learning has raised security and privacy concerns, as more and more tasks involve sensitive data.

Adversarial Attack

Towards Demystifying Membership Inference Attacks

1 code implementation • 28 Jun 2018 • Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Lei Yu, Wenqi Wei

Our empirical results additionally show that (1) using the type of target model under attack within the attack model may not increase attack effectiveness and (2) collaborative learning in federated systems exposes vulnerabilities to membership inference risks when the adversary is a participant in the federation.

Cryptography and Security
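The attacks analyzed here follow the shadow-model recipe: train models on data the adversary controls, label their outputs as member or non-member, then fit an attack model on the resulting confidence vectors. A minimal self-contained sketch (scikit-learn stand-ins, not the paper's exact attack models):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Shadow model trained on data the adversary controls (X_in are "members").
shadow = RandomForestClassifier(random_state=0).fit(X_in, y_in)

# Attack features: the shadow model's confidence vectors, labeled by membership.
feats = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
membership = np.r_[np.ones(len(X_in)), np.zeros(len(X_out))]

attack = LogisticRegression().fit(feats, membership)
print(attack.score(feats, membership))  # accuracy above 0.5 signals leakage
```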
