2 code implementations • 16 Jul 2020 • Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server.
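The paradigm described above can be sketched in a few lines: each client computes an update on its private data and only the resulting model parameters travel to the server, which averages them (a minimal FedAvg-style sketch on a toy linear model; the local objective and learning rate here are illustrative assumptions, not the paper's setup).

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step on a client's private data.

    Hypothetical least-squares client; the paper does not prescribe
    a specific local objective.
    """
    X, y = data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, client_data):
    """Server averages the clients' returned weights (FedAvg-style).

    Raw data never leaves the clients; only parameter vectors are shared.
    """
    updates = [local_update(global_weights.copy(), d) for d in client_data]
    return np.mean(updates, axis=0)

# Two clients, each holding a private dataset generated from the same
# ground-truth weights.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(32, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
# w converges toward true_w without the server ever seeing X or y.
```

With one local step per round, averaging the returned weights is equivalent to a gradient step on the pooled objective, which is why the global model converges despite the data staying decentralized.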
1 code implementation • 11 Jul 2020 • Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
no code implementations • 5 Jun 2020 • Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, Wenqi Wei
However, in federated learning, model parameter updates are collected iteratively from each participant and consist of high-dimensional, continuous values with high precision (tens of digits after the decimal point), making existing LDP protocols inapplicable.
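To make the difficulty concrete, one common way to locally perturb continuous, high-dimensional updates is to clip each coordinate and add Laplace noise (a minimal sketch only; the clip bound and per-coordinate epsilon are illustrative assumptions, not the paper's LDP-Fed protocol design).

```python
import numpy as np

def ldp_perturb(update, clip=1.0, eps=0.5, rng=None):
    """Clip each coordinate to [-clip, clip], then add Laplace noise.

    Illustrative sketch: a clipped coordinate has sensitivity 2*clip,
    so Laplace scale 2*clip/eps gives eps-LDP per coordinate. Note the
    noise needed for high-dimensional updates is substantial, which is
    part of why naive per-value LDP struggles in this setting.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(update, -clip, clip)
    noise = rng.laplace(scale=2 * clip / eps, size=update.shape)
    return clipped + noise

# Averaging many noisy reports recovers the underlying signal in
# expectation, but any single report is heavily randomized.
rng = np.random.default_rng(1)
true_update = np.full(10, 0.3)
reports = [ldp_perturb(true_update, eps=0.5, rng=rng) for _ in range(5000)]
estimate = np.mean(reports, axis=0)
```

The per-report noise scale (here 2 * 1.0 / 0.5 = 4) dwarfs the signal, so useful accuracy requires either very many participants or a tighter protocol, which motivates specialized designs for federated settings.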
2 code implementations • 22 Apr 2020 • Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu
FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to only share local training parameter updates with the federated server.
2 code implementations • 9 Apr 2020 • Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu
The rapid growth of real-time big data capture has pushed deep learning and data analytics to edge systems.
no code implementations • 25 Jan 2020 • Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng
To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.
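The tiering idea above can be sketched as follows: profile each client's training latency, bucket clients into tiers, and draw every round's participants from a single tier so fast clients never wait on stragglers (an illustrative sketch with hypothetical client names; TiFL's actual profiling and adaptive tier-selection policy are more involved).

```python
import random
from collections import defaultdict

def assign_tiers(latencies, num_tiers=3):
    """Group clients into tiers by observed per-round training latency.

    `latencies` maps client id -> seconds per local round. Clients are
    ranked fastest-first and split into roughly equal tiers.
    """
    ranked = sorted(latencies, key=latencies.get)
    size = -(-len(ranked) // num_tiers)  # ceiling division
    tiers = defaultdict(list)
    for i, client in enumerate(ranked):
        tiers[i // size].append(client)
    return dict(tiers)

def select_round(tiers, clients_per_round=2, rng=None):
    """Pick all of a round's participants from one tier, so the round's
    duration is bounded by that tier's speed, not the slowest client."""
    rng = rng or random.Random()
    tier = rng.choice(list(tiers))
    return rng.sample(tiers[tier], min(clients_per_round, len(tiers[tier])))

# Hypothetical clients with heterogeneous training latencies (seconds).
latencies = {"c1": 1.2, "c2": 0.9, "c3": 5.0, "c4": 4.8, "c5": 2.5, "c6": 2.7}
tiers = assign_tiers(latencies)
round_clients = select_round(tiers, rng=random.Random(0))
```

Because every participant in a round comes from the same tier, round time tracks that tier's latency; a real system would also rotate across tiers to avoid biasing the model toward fast clients' data.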
no code implementations • 21 Nov 2019 • Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Wenqi Wei, Lei Yu
Second, through MPLens, we highlight how the vulnerability of pre-trained models under membership inference attack is not uniform across all classes, particularly when the training data itself is skewed.
no code implementations • 1 Oct 2019 • Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Emre Gursoy, Stacey Truex, Yanzhao Wu
Deep neural networks (DNNs) have demonstrated success in multiple domains.
no code implementations • 29 Aug 2019 • Ling Liu, Wenqi Wei, Ka-Ho Chow, Margaret Loper, Emre Gursoy, Stacey Truex, Yanzhao Wu
In this paper, we first give an overview of the concept of ensemble diversity and examine the three types of ensemble diversity in the context of DNN classifiers.
no code implementations • 15 May 2019 • Mehmet Emre Gursoy, Acar Tamersoy, Stacey Truex, Wenqi Wei, Ling Liu
In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy.
Cryptography and Security, Databases
no code implementations • 3 Apr 2019 • Lei Yu, Ling Liu, Calton Pu, Mehmet Emre Gursoy, Stacey Truex
However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage.
1 code implementation • 7 Dec 2018 • Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou
Federated learning facilitates the collaborative training of models without the sharing of raw data.
no code implementations • 29 Jun 2018 • Wenqi Wei, Ling Liu, Margaret Loper, Stacey Truex, Lei Yu, Mehmet Emre Gursoy, Yanzhao Wu
The burgeoning success of deep learning has raised security and privacy concerns, as more and more tasks involve sensitive data.
1 code implementation • 28 Jun 2018 • Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Lei Yu, Wenqi Wei
Our empirical results additionally show that (1) using the type of target model under attack within the attack model may not increase attack effectiveness and (2) collaborative learning in federated systems exposes vulnerabilities to membership inference risks when the adversary is a participant in the federation.
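A minimal baseline for the membership inference setting studied here is a confidence-threshold attack: predict "member" when the target model is unusually confident on a record (a deliberately simple sketch with toy numbers; the paper's attacks instead train a dedicated attack model on shadow-model outputs rather than using a fixed threshold).

```python
import numpy as np

def confidence_attack(model_confidences, threshold=0.9):
    """Flag a record as a training-set member when the target model's
    top-class confidence meets the threshold.

    Threshold 0.9 is an illustrative assumption; models tend to be more
    confident on records they were trained on, which is the signal
    membership inference attacks exploit.
    """
    return model_confidences >= threshold

# Toy confidences: the model is typically (not always) more confident
# on training members than on unseen records.
member_conf = np.array([0.99, 0.97, 0.95, 0.88])
nonmember_conf = np.array([0.70, 0.92, 0.60, 0.55])
tpr = confidence_attack(member_conf).mean()     # true positive rate
fpr = confidence_attack(nonmember_conf).mean()  # false positive rate
```

Even this crude attack separates members from non-members better than chance on the toy data, illustrating the leakage channel; learned attack models sharpen it considerably, and a federated participant observing intermediate updates has still more signal to work with.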
Cryptography and Security