Search Results for author: Lingjuan Lyu

Found 25 papers, 7 papers with code

Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation

no code implementations NeurIPS 2021 Jinming Cui, Chaochao Chen, Lingjuan Lyu, Carl Yang, Wang Li

As a result, our model can not only improve the recommendation performance of the rating platform by incorporating the sparse social data on the social platform, but also protect data privacy of both platforms.

Information Retrieval

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

no code implementations NeurIPS 2021 Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low

In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance.

Fairness Federated Learning
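
The cosine gradient idea summarized above can be illustrated with a small sketch: score each agent's uploaded gradient by its cosine similarity to the aggregate gradient, so agents whose updates align with the collective direction earn higher rewards. The function name and the use of a plain sum as the aggregate are assumptions for illustration, not the paper's exact cosine gradient Shapley value.

```python
import numpy as np

def cosine_gradient_scores(client_grads):
    """Score each client's flattened gradient by cosine similarity to the
    aggregate gradient. An illustrative stand-in for the paper's cosine
    gradient Shapley value, not its exact formulation."""
    agg = np.sum(client_grads, axis=0)  # simple sum as the aggregate update
    scores = []
    for g in client_grads:
        denom = np.linalg.norm(g) * np.linalg.norm(agg)
        scores.append(float(g @ agg / denom) if denom > 0 else 0.0)
    return scores

# Two aligned clients and one whose update opposes the aggregate:
grads = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
print(cosine_gradient_scores(grads))
```

A client pushing against the collective direction receives a negative score, which is the kind of signal a fair reward scheme can act on.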

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

1 code implementation NeurIPS 2021 Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class).

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

no code implementations 3 Sep 2021 Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu Sun


In this work, we observe an interesting phenomenon that the variations of parameters are always AWPs when tuning the trained clean model to inject backdoors.

FedKD: Communication Efficient Federated Learning via Knowledge Distillation

no code implementations 30 Aug 2021 Chuhan Wu, Fangzhao Wu, Ruixuan Liu, Lingjuan Lyu, Yongfeng Huang, Xing Xie

Instead of directly communicating the large models between clients and server, we propose an adaptive mutual distillation framework to reciprocally learn a student and a teacher model on each client, where only the student model is shared by different clients and updated collaboratively to reduce the communication cost.

Federated Learning Knowledge Distillation
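
The mutual distillation described above can be sketched as two softened-prediction objectives, one pulling the student toward the teacher and one pulling the teacher toward the student. The temperature value and function names below are assumptions for illustration; FedKD's actual adaptive weighting is not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a vector of logits."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def mutual_distillation_losses(student_logits, teacher_logits, T=2.0):
    """Illustrative mutual-distillation objective: each model is pushed
    toward the other's softened prediction. Returns (student_loss,
    teacher_loss); names and temperature are assumptions, not FedKD's
    exact formulation."""
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    return kl(p_t, p_s), kl(p_s, p_t)
```

Only the small student would be communicated between client and server, which is where the communication savings come from.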

Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs

no code implementations 29 Aug 2021 Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari

Machine-learning-as-a-service (MLaaS) has attracted millions of users with its sophisticated, high-performing models.

Model extraction Unsupervised Domain Adaptation

A Novel Attribute Reconstruction Attack in Federated Learning

no code implementations 16 Aug 2021 Lingjuan Lyu, Chen Chen

We perform the first systematic evaluation of attribute reconstruction attack (ARA) launched by the malicious server in the FL system, and empirically demonstrate that the shared epoch-averaged local model gradients can reveal sensitive attributes of local training data of any victim participant.

Federated Learning

A Vertical Federated Learning Framework for Graph Convolutional Network

no code implementations 22 Jun 2021 Xiang Ni, Xiaolong Xu, Lingjuan Lyu, Changhua Meng, Weiqiang Wang

Recently, Graph Neural Network (GNN) has achieved remarkable success in various real-world problems on graph data.

Federated Learning Node Classification

Killing Two Birds with One Stone: Stealing Model and Inferring Attribute from BERT-based APIs

no code implementations 23 May 2021 Lingjuan Lyu, Xuanli He, Fangzhao Wu, Lichao Sun

Advances in pre-trained models (e.g., BERT, XLNet, etc.) have largely revolutionized the predictive performance of various modern natural language processing tasks.

Inference Attack Model extraction

Robust Training Using Natural Transformation

no code implementations 10 May 2021 Shuo Wang, Lingjuan Lyu, Surya Nepal, Carsten Rudolph, Marthie Grobler, Kristen Moore

We target attributes of the input images that are independent of the class identification, and manipulate those attributes to mimic real-world natural transformations (NaTra) of the inputs, which are then used to augment the training dataset of the image classifier.

Data Augmentation Image Classification +1

Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!

1 code implementation NAACL 2021 Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun

Finally, we investigate two defence strategies to protect the victim model and find that, unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models.

Model extraction Text Classification +1

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

1 code implementation ICLR 2021 Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

NAD utilizes a teacher network to guide the finetuning of the backdoored student network on a small clean subset of data such that the intermediate-layer attention of the student network aligns with that of the teacher network.
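
The attention alignment summarized above can be sketched as follows: collapse each network's feature map into a normalized spatial attention map and penalize the distance between the student's and teacher's maps. This is a minimal sketch in the spirit of attention transfer; the function names and the exact normalization are assumptions, not NAD's precise loss.

```python
import numpy as np

def attention_map(feat):
    """Collapse a C x H x W feature map into a normalized spatial attention
    map by summing squared activations over channels."""
    a = np.sum(feat ** 2, axis=0)
    return a / (np.linalg.norm(a) + 1e-8)

def nad_style_loss(student_feat, teacher_feat):
    """Illustrative NAD-style objective at one layer: L2 distance between
    the student's and teacher's spatial attention maps. Finetuning the
    backdoored student under this penalty (plus the clean-data loss)
    discourages trigger-specific attention patterns."""
    diff = attention_map(student_feat) - attention_map(teacher_feat)
    return float(np.linalg.norm(diff))
```

In the paper's setup this penalty would be applied at intermediate layers while finetuning on the small clean subset.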

Exploring Vulnerabilities of BERT-based APIs

no code implementations 1 Jan 2021 Xuanli He, Lingjuan Lyu, Lichao Sun, Xiaojun Chang, Jun Zhao

We then demonstrate how the extracted model can be exploited to develop effective attribute inference attack to expose sensitive information of the training data.

Inference Attack Model extraction +2

Privacy and Robustness in Federated Learning: Attacks and Defenses

no code implementations 7 Dec 2020 Lingjuan Lyu, Han Yu, Xingjun Ma, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu

Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.

Federated Learning

A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning

1 code implementation 20 Nov 2020 Xinyi Xu, Lingjuan Lyu

In this paper, we propose a novel Robust and Fair Federated Learning (RFFL) framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism.

Adversarial Defense Adversarial Robustness +2

Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness

2 code implementations Findings of the Association for Computational Linguistics 2020 Lingjuan Lyu, Xuanli He, Yitong Li

It has been demonstrated that hidden representation learned by a deep model can encode private information of the input, hence can be exploited to recover such information with reasonable accuracy.

Fairness

Federated Model Distillation with Noise-Free Differential Privacy

no code implementations 11 Sep 2020 Lichao Sun, Lingjuan Lyu

Conventional federated learning directly averages model weights, which is only possible for collaboration between models with homogeneous architectures.

Federated Learning Model distillation

Collaborative Fairness in Federated Learning

1 code implementation 27 Aug 2020 Lingjuan Lyu, Xinyi Xu, Qian Wang

In current deep learning paradigms, local training or the Standalone framework tends to result in overfitting and thus poor generalizability.

Fairness Federated Learning

Local Differential Privacy and Its Applications: A Comprehensive Survey

no code implementations 9 Aug 2020 Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, Kwok-Yan Lam

Local differential privacy (LDP), as a strong privacy tool, has been widely deployed in the real world in recent years.

Cryptography and Security
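
A concrete instance of the LDP deployments this survey covers is randomized response, the canonical epsilon-LDP mechanism: each user perturbs their own bit locally before reporting it, and the aggregator debiases the noisy tallies. The function names below are illustrative; the mechanism itself is standard.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. Satisfies epsilon-local differential privacy."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports, epsilon):
    """Unbiased estimate of the true mean from the noisy reports:
    E[report] = (1 - p) + true_mean * (2p - 1), so invert that map."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)
```

No individual report reveals the true bit with certainty, yet population statistics remain recoverable, which is exactly the trade-off LDP formalizes.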

Towards Differentially Private Text Representations

no code implementations 25 Jun 2020 Lingjuan Lyu, Yitong Li, Xuanli He, Tong Xiao

Most deep learning frameworks require users to pool their local data or model updates to a trusted server to train or maintain a global model.

Local Differential Privacy based Federated Learning for Internet of Things

no code implementations 19 Apr 2020 Yang Zhao, Jun Zhao, Mengmeng Yang, Teng Wang, Ning Wang, Lingjuan Lyu, Dusit Niyato, Kwok-Yan Lam

To avoid privacy threats and reduce communication cost, in this paper we propose to integrate federated learning with local differential privacy (LDP), enabling crowdsourcing applications to train machine learning models.

Federated Learning

Threats to Federated Learning: A Survey

no code implementations 4 Mar 2020 Lingjuan Lyu, Han Yu, Qiang Yang

It is thus of paramount importance to make FL system designers aware of the implications of future FL algorithm design for privacy preservation.

Federated Learning

Privacy-Preserving Blockchain-Based Federated Learning for IoT Devices

no code implementations 26 Jun 2019 Yang Zhao, Jun Zhao, Linshan Jiang, Rui Tan, Dusit Niyato, Zengxiang Li, Lingjuan Lyu, Yingbo Liu

To help manufacturers develop a smart home system, we design a federated learning (FL) system that leverages a reputation mechanism to assist home appliance manufacturers in training a machine learning model based on customers' data.

Edge-computing Federated Learning

Towards Fair and Privacy-Preserving Federated Deep Models

1 code implementation 4 Jun 2019 Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin, Han Yu, Kee Siong Ng

This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.

Fairness Federated Learning +1
