Search Results for author: Lingjuan Lyu

Found 83 papers, 30 papers with code

How to Trace Latent Generative Model Generated Images without Artificial Watermark?

no code implementations22 May 2024 Zhenting Wang, Vikash Sehwag, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas, Shiqing Ma

To study this problem, we design a latent inversion based method called LatentTracer to trace the generated images of the inspected model by checking if the examined images can be well-reconstructed with an inverted latent input.
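The reconstruction check behind this idea can be illustrated with a deliberately tiny stand-in: a hypothetical linear "generator" whose latent can be inverted in closed form, so images that came from the generator reconstruct almost perfectly while foreign images leave a large residual. Every name and number below is invented for illustration; LatentTracer itself inverts deep latent generative models, not a toy linear map.

```python
# Toy sketch of latent-inversion tracing (all names hypothetical).
# The "generator" maps a scalar latent onto a line in pixel space; an
# image traces back to this model iff its best latent reconstructs it.

def generate(z):
    # toy generator: latent z -> 2-pixel "image" on a fixed line
    return (2.0 * z, -1.0 * z)

def invert(image):
    # closed-form least-squares latent for g(z) = (2z, -z):
    # minimize (2z - x0)^2 + (-z - x1)^2 over z
    x0, x1 = image
    return (2.0 * x0 - 1.0 * x1) / (2.0 ** 2 + (-1.0) ** 2)

def reconstruction_error(image):
    r0, r1 = generate(invert(image))
    x0, x1 = image
    return (r0 - x0) ** 2 + (r1 - x1) ** 2

generated = generate(0.7)   # produced by this generator
foreign = (1.0, 1.0)        # off the generator's output manifold

assert reconstruction_error(generated) < 1e-12  # near-perfect recovery
assert reconstruction_error(foreign) > 0.1      # large residual
```

In the real setting the inversion is an optimization over the model's latent space rather than a closed-form solve, but the decision rule (threshold on reconstruction error) is the same shape.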

FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity

no code implementations15 Apr 2024 Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu

The interest in federated learning has surged in recent research due to its unique ability to train a global model using privacy-secured information held locally on each client.

Federated Learning Network Pruning

Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization

no code implementations28 Mar 2024 Yuhang Li, Xin Dong, Chen Chen, Jingtao Li, Yuxin Wen, Michael Spranger, Lingjuan Lyu

Synthetic image data generation represents a promising avenue for training deep learning models, particularly in the realm of transfer learning, where obtaining real images within a specific domain can be prohibitively expensive due to privacy and intellectual property considerations.

Transfer Learning

Finding needles in a haystack: A Black-Box Approach to Invisible Watermark Detection

no code implementations23 Mar 2024 Minzhou Pan, Zhenting Wang, Xin Dong, Vikash Sehwag, Lingjuan Lyu, Xue Lin

In this paper, we propose WaterMark Detection (WMD), the first invisible watermark detection method under a black-box and annotation-free setting.

FedMef: Towards Memory-efficient Federated Dynamic Pruning

no code implementations21 Mar 2024 Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu

To address these challenges, we propose FedMef, a novel and memory-efficient federated dynamic pruning framework.

Federated Learning Network Pruning

Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention

1 code implementation17 Mar 2024 Jie Ren, Yaxin Li, Shenglai Zeng, Han Xu, Lingjuan Lyu, Yue Xing, Jiliang Tang

Recent advancements in text-to-image diffusion models have demonstrated their remarkable capability to generate high-quality images from textual prompts.


Minimum Topology Attacks for Graph Neural Networks

no code implementations5 Mar 2024 Mengmei Zhang, Xiao Wang, Chuan Shi, Lingjuan Lyu, Tianchi Yang, Junping Du

To break this dilemma, we propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.

Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning

no code implementations19 Feb 2024 Shuai Zhao, Leilei Gan, Luu Anh Tuan, Jie Fu, Lingjuan Lyu, Meihuizi Jia, Jinming Wen

Motivated by this insight, we developed a Poisoned Sample Identification Module (PSIM) leveraging PEFT, which identifies poisoned samples through confidence, providing robust defense against weight-poisoning backdoor attacks.

Backdoor Attack text-classification +1

Privacy-preserving design of graph neural networks with applications to vertical federated learning

no code implementations31 Oct 2023 Ruofan Wu, Mingyang Zhang, Lingjuan Lyu, Xiaolong Xu, Xiuquan Hao, Xinyi Fu, Tengfei Liu, Tianyi Zhang, Weiqiang Wang

The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great success in applications to financial risk management (FRM).

Graph Representation Learning Management +2

MAS: Towards Resource-Efficient Federated Multiple-Task Learning

1 code implementation ICCV 2023 Weiming Zhuang, Yonggang Wen, Lingjuan Lyu, Shuai Zhang

Then, we present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.

Federated Learning

Federated Learning over a Wireless Network: Distributed User Selection through Random Access

no code implementations7 Jul 2023 Chen Sun, Shiyao Ma, Ce Zheng, Songtao Wu, Tao Cui, Lingjuan Lyu

This study proposes a network intrinsic approach of distributed user selection that leverages the radio resource competition mechanism in random access.

Fairness Federated Learning

DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models

1 code implementation6 Jul 2023 Zhenting Wang, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas, Shiqing Ma

To address this issue, we propose a method for detecting such unauthorized data usage by planting the injected memorization into the text-to-image diffusion models trained on the protected dataset.


When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions

no code implementations27 Jun 2023 Weiming Zhuang, Chen Chen, Lingjuan Lyu

The intersection of the Foundation Model (FM) and Federated Learning (FL) provides mutual benefits, presents a unique opportunity to unlock new possibilities in AI research, and addresses critical challenges in AI and real-world applications.

Federated Learning Privacy Preserving

FedSampling: A Better Sampling Strategy for Federated Learning

no code implementations25 Jun 2023 Tao Qi, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie

In this paper, instead of client uniform sampling, we propose a novel data uniform sampling strategy for federated learning (FedSampling), which can effectively improve the performance of federated learning especially when client data size distribution is highly imbalanced across clients.

Federated Learning Privacy Preserving
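The contrast between client-uniform and data-uniform sampling can be sketched in a few lines. This is a simplification under invented sizes: picking clients with probability proportional to how much data they hold makes each individual sample equally likely to be used (the paper additionally handles estimating client sizes privately, which this sketch omits).

```python
import random

def client_uniform_pick(sizes, rng):
    # baseline: every client equally likely, regardless of data held
    return rng.randrange(len(sizes))

def data_uniform_pick(sizes, rng):
    # data-uniform sketch: pick clients proportionally to data size,
    # so each individual sample has the same chance of being selected
    total = sum(sizes)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, s in enumerate(sizes):
        acc += s
        if r <= acc:
            return i
    return len(sizes) - 1

rng = random.Random(0)
sizes = [1, 1, 98]  # highly imbalanced client data sizes

picks = [data_uniform_pick(sizes, rng) for _ in range(10_000)]
share = picks.count(2) / len(picks)
assert 0.95 < share < 1.0  # client 2 holds 98% of the data

base_picks = [client_uniform_pick(sizes, rng) for _ in range(10_000)]
assert 0.25 < base_picks.count(2) / len(base_picks) < 0.42  # ~1/3 under the baseline
```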

Pushing the Limits of ChatGPT on NLP Tasks

no code implementations16 Jun 2023 Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, Guoyin Wang

In this work, we propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks.

Dependency Parsing Event Extraction +9

FedWon: Triumphing Multi-domain Federated Learning Without Normalization

no code implementations9 Jun 2023 Weiming Zhuang, Lingjuan Lyu

Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients.

Domain Generalization Federated Learning

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

1 code implementation4 Jun 2023 Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, Ruoxi Jia, Jiayu Zhou

Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model.

Backdoor Defense for Data-Free Distillation with Poisoned Teachers Data-free Knowledge Distillation

Alteration-free and Model-agnostic Origin Attribution of Generated Images

no code implementations29 May 2023 Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma

To overcome this problem, we first develop an alteration-free and model-agnostic origin attribution method via input reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image.

Image Generation

Reconstructive Neuron Pruning for Backdoor Defense

1 code implementation24 May 2023 Yige Li, Xixiang Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Bo Li, Yu-Gang Jiang

Specifically, RNP first unlearns the neurons by maximizing the model's error on a small subset of clean samples and then recovers the neurons by minimizing the model's error on the same data.

backdoor defense

Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark

1 code implementation17 May 2023 Wenjun Peng, Jingwei Yi, Fangzhao Wu, Shangxi Wu, Bin Zhu, Lingjuan Lyu, Binxing Jiao, Tong Xu, Guangzhong Sun, Xing Xie

Companies have begun to offer Embedding as a Service (EaaS) based on these LLMs, which can benefit various natural language processing (NLP) tasks for customers.

Model extraction

DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for Video-Empowered Intelligent Transportation

no code implementations19 Apr 2023 Yu Guo, Ryan Wen Liu, Jiangtian Nie, Lingjuan Lyu, Zehui Xiong, Jiawen Kang, Han Yu, Dusit Niyato

To eliminate the influences of adverse weather conditions, we propose a dual attention and dual frequency-guided dehazing network (termed DADFNet) for real-time visibility enhancement.

Management object-detection +1

Towards Adversarially Robust Continual Learning

no code implementations31 Mar 2023 Tao Bai, Chen Chen, Lingjuan Lyu, Jun Zhao, Bihan Wen

Recent studies show that models trained by continual learning can achieve performance comparable to standard supervised learning, and the learning flexibility of continual learning models enables their wide application in the real world.

Adversarial Robustness Continual Learning

Backdoor Attacks with Input-unique Triggers in NLP

no code implementations25 Mar 2023 Xukun Zhou, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Muqiao Yang, Jun He

Backdoor attacks aim to induce neural models to make incorrect predictions for poisoned data while keeping predictions on the clean dataset unchanged, which poses a considerable threat to current natural language processing (NLP) systems.

Backdoor Attack Language Modelling +1

A Pathway Towards Responsible AI Generated Content

no code implementations2 Mar 2023 Chen Chen, Jie Fu, Lingjuan Lyu

AI Generated Content (AIGC) has received tremendous attention within the past few years, with content generated in the format of image, text, audio, video, etc.


On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space

no code implementations23 Feb 2023 Yuyang Deng, Nidham Gazagnadou, Junyuan Hong, Mehrdad Mahdavi, Lingjuan Lyu

Recent studies demonstrated that the adversarially robust learning under $\ell_\infty$ attack is harder to generalize to different domains than standard domain adaptation.

Binary Classification Domain Generalization +1

ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms

1 code implementation22 Feb 2023 Minzhou Pan, Yi Zeng, Lingjuan Lyu, Xue Lin, Ruoxi Jia

However, we lack a thorough understanding of the applicability of existing detection methods across a variety of learning settings.

backdoor defense Self-Supervised Learning +1

InOR-Net: Incremental 3D Object Recognition Network for Point Cloud Representation

no code implementations20 Feb 2023 Jiahua Dong, Yang Cong, Gan Sun, Lixu Wang, Lingjuan Lyu, Jun Li, Ender Konukoglu

Moreover, they cannot explore which 3D geometric characteristics are essential to alleviate the catastrophic forgetting on old classes of 3D objects.

3D Object Recognition Fairness

Delving into the Adversarial Robustness of Federated Learning

no code implementations19 Feb 2023 Jie Zhang, Bo Li, Chen Chen, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chao Wu

In this work, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components (local re-weighting and global regularization) to improve both accuracy and robustness of FL systems.

Adversarial Robustness Federated Learning

Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting

1 code implementation13 Feb 2023 Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen

In order to address this issue, we propose GAS, a gradient splitting approach that can successfully adapt existing robust AGRs to non-IID settings.

Federated Learning

MECTA: Memory-Economic Continual Test-Time Model Adaptation

2 code implementations ICLR 2023 Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, Michael Spranger

The proposed MECTA is efficient and can be seamlessly plugged into state-of-the-art CTA algorithms at negligible overhead on computation and memory.

Test-time Adaptation

SplitGNN: Splitting GNN for Node Classification with Heterogeneous Attention

no code implementations27 Jan 2023 Xiaolong Xu, Lingjuan Lyu, Yihong Dong, Yicheng Lu, Weiqiang Wang, Hong Jin

With the frequent happening of privacy leakage and the enactment of privacy laws across different countries, data owners are reluctant to directly share their raw data and labels with any other party.

Classification Federated Learning +1

DEJA VU: Continual Model Generalization For Unseen Domains

2 code implementations25 Jan 2023 Chenxi Liu, Lixu Wang, Lingjuan Lyu, Chen Sun, Xiao Wang, Qi Zhu

To overcome these limitations of DA and DG in handling the Unfamiliar Period during continual domain shift, we propose RaTP, a framework that focuses on improving models' target domain generalization (TDG) capability, while also achieving effective target domain adaptation (TDA) capability right after training on certain domains and forgetting alleviation (FA) capability on past domains.

Data Augmentation Domain Generalization

FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation

1 code implementation14 Dec 2022 Ziqing Fan, Yanfeng Wang, Jiangchao Yao, Lingjuan Lyu, Ya zhang, Qi Tian

However, in addition to previous explorations for improvement in federated averaging, our analysis shows that another critical bottleneck is the poorer optima of client models in more heterogeneous conditions.

Federated Learning

ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals

no code implementations11 Dec 2022 Rui Song, Liguo Zhou, Lingjuan Lyu, Andreas Festag, Alois Knoll

To address this bottleneck, we introduce a residual-based federated learning framework (ResFed), where residuals rather than model parameters are transmitted in communication networks for training.

Federated Learning Quantization
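The residual-transmission idea can be sketched with invented toy values: instead of uploading the full parameter vector, the client sends only the (compressed) difference from a shared base, and the server adds it back. The top-k sparsification here is one generic stand-in for compression; the paper's actual compression pipeline is deeper than this.

```python
def topk_residual(new, base, k):
    # keep only the k largest-magnitude residual entries (lossy compression)
    res = [n - b for n, b in zip(new, base)]
    keep = sorted(range(len(res)), key=lambda i: abs(res[i]), reverse=True)[:k]
    return {i: res[i] for i in keep}

def apply_residual(base, sparse_res):
    # server side: reconstruct the model from the shared base + residual
    out = list(base)
    for i, v in sparse_res.items():
        out[i] += v
    return out

base = [0.0, 0.0, 0.0, 0.0]          # model both sides already share
new = [0.01, -2.0, 0.02, 1.5]        # client's locally trained weights

msg = topk_residual(new, base, k=2)  # only 2 of 4 values cross the network
recon = apply_residual(base, msg)

assert msg == {1: -2.0, 3: 1.5}
assert recon == [0.0, -2.0, 0.0, 1.5]  # large updates preserved, tiny ones dropped
```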

GNN-SL: Sequence Labeling Based on Nearest Examples via GNN

1 code implementation5 Dec 2022 Shuhe Wang, Yuxian Meng, Rongbin Ouyang, Jiwei Li, Tianwei Zhang, Lingjuan Lyu, Guoyin Wang

To better handle long-tail cases in the sequence labeling (SL) task, in this work, we introduce graph neural networks sequence labeling (GNN-SL), which augments the vanilla SL model output with similar tagging examples retrieved from the whole training set.

Chinese Word Segmentation named-entity-recognition +4

Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling

no code implementations23 Oct 2022 Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, Michael Spranger

As deep learning blooms with growing demand for computation and data resources, outsourcing model training to a powerful cloud server becomes an attractive alternative to training at a low-power and cost-effective end device.

Model Compression

Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models

1 code implementation18 Oct 2022 Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, Xu sun

In this work, we take the first step to exploit the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models.

Language Modelling Sentence +4

Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees

1 code implementation4 Sep 2022 Jiaqian Ren, Lei Jiang, Hao Peng, Lingjuan Lyu, Zhiwei Liu, Chaochao Chen, Jia Wu, Xu Bai, Philip S. Yu

Integrating multiple online social networks (OSNs) has important implications for many downstream social mining tasks, such as user preference modelling, recommendation, and link prediction.

Attribute Link Prediction +2

RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation

no code implementations22 Aug 2022 Qucheng Peng, Zhengming Ding, Lingjuan Lyu, Lichao Sun, Chen Chen

For the input-level, we design a new data augmentation technique named Phase MixUp, which highlights task-relevant objects in the interpolations, thus enhancing input-level regularization and class consistency for target models.

Data Augmentation Self-Knowledge Distillation +1

Accelerated Federated Learning with Decoupled Adaptive Optimization

no code implementations14 Jul 2022 Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou

The federated learning (FL) framework enables edge clients to collaboratively learn a shared inference model while keeping privacy of training data on clients.

Federated Learning

Turning a Curse into a Blessing: Enabling In-Distribution-Data-Free Backdoor Removal via Stabilized Model Inversion

no code implementations14 Jun 2022 Si Chen, Yi Zeng, Jiachen T. Wang, Won Park, Xun Chen, Lingjuan Lyu, Zhuoqing Mao, Ruoxi Jia

Our work is the first to provide a thorough understanding of leveraging model inversion for effective backdoor removal by addressing key questions about reconstructed samples' properties, perceptual similarity, and the potential presence of backdoor triggers.

FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning

1 code implementation7 Jun 2022 Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Zhongliang Yang, Yongfeng Huang, Xing Xie

In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data.

Fairness Privacy Preserving +1

Privacy for Free: How does Dataset Condensation Help Privacy?

1 code implementation1 Jun 2022 Tian Dong, Bo Zhao, Lingjuan Lyu

In this work, we for the first time identify that dataset condensation (DC), which is originally designed for improving training efficiency, is also a better solution to replace the traditional data generators for private data generation, thus providing privacy for free.

Dataset Condensation Privacy Preserving

CalFAT: Calibrated Federated Adversarial Training with Label Skewness

1 code implementation30 May 2022 Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu

In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models.

Adversarial Robustness Federated Learning

IDEAL: Query-Efficient Data-Free Learning from Black-box Models

1 code implementation23 May 2022 Jie Zhang, Chen Chen, Lingjuan Lyu

Knowledge Distillation (KD) is a typical method for training a lightweight student model with the help of a well-trained teacher model.

Knowledge Distillation

Data-Free Adversarial Knowledge Distillation for Graph Neural Networks

no code implementations8 May 2022 Yuanxin Zhuang, Lingjuan Lyu, Chuan Shi, Carl Yang, Lichao Sun

Graph neural networks (GNNs) have been widely used in modeling graph structured data, owing to their impressive performance in a wide range of practical applications.

Generative Adversarial Network Graph Classification +3

PrivateRec: Differentially Private Training and Serving for Federated News Recommendation

no code implementations18 Apr 2022 Ruixuan Liu, Yanlin Wang, Yang Cao, Lingjuan Lyu, Weike Pan, Yun Chen, Hong Chen

Collecting and training over sensitive personal data raise severe privacy concerns in personalized recommendation systems, and federated learning can potentially alleviate the problem by training models over decentralized user data. However, a theoretically private solution in both the training and serving stages of federated recommendation is essential but still lacking. Furthermore, naively applying differential privacy (DP) to the two stages in federated recommendation would fail to achieve a satisfactory trade-off between privacy and utility due to the high-dimensional characteristics of model gradients and hidden representations. In this work, we propose a federated news recommendation method for achieving a better utility in model training and online serving under a DP guarantee. We first clarify the DP definition over behavior data for each round in the life-cycle of federated recommendation systems. Next, we propose a privacy-preserving online serving mechanism under this definition based on the idea of decomposing user embeddings with public basic vectors and perturbing the lower-dimensional combination coefficients.

Federated Learning News Recommendation +2

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information

2 code implementations11 Apr 2022 Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia

With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger.

Backdoor Attack Clean-label Backdoor Attack (0.024%) +1
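The two budget figures in the abstract are mutually consistent for a class-balanced dataset, which a quick calculation shows. The sizes below are hypothetical CIFAR-10-like numbers chosen for illustration, not taken from the paper:

```python
# Sanity-check of the quoted poisoning budget on an illustrative,
# class-balanced dataset (50,000 images, 10 classes).
train_size = 50_000
num_classes = 10
target_class_size = train_size // num_classes   # 5,000 images per class

poisoned = int(target_class_size * 0.005)       # 0.5% of the target class
assert poisoned == 25
assert poisoned / train_size == 0.0005          # i.e. 0.05% of the training set
```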

No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices

no code implementations16 Feb 2022 Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie

In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.

Federated Learning Knowledge Distillation +1

Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation

no code implementations NeurIPS 2021 Jamie Cui, Chaochao Chen, Lingjuan Lyu, Carl Yang, Li Wang

As a result, our model can not only improve the recommendation performance of the rating platform by incorporating the sparse social data on the social platform, but also protect data privacy of both platforms.

Information Retrieval Retrieval

Differential Private Knowledge Transfer for Privacy-Preserving Cross-Domain Recommendation

no code implementations10 Feb 2022 Chaochao Chen, Huiwen Wu, Jiajie Su, Lingjuan Lyu, Xiaolin Zheng, Li Wang

To this end, PriCDR can not only protect the data privacy of the source domain, but also alleviate the data sparsity of the source domain.

Privacy Preserving Recommendation Systems +1

DENSE: Data-Free One-Shot Federated Learning

1 code implementation23 Dec 2021 Jie Zhang, Chen Chen, Bo Li, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chunhua Shen, Chao Wu

One-shot Federated Learning (FL) has recently emerged as a promising approach, which allows the central server to learn a model in a single communication round.

Federated Learning

Protecting Intellectual Property of Language Generation APIs with Lexical Watermark

1 code implementation5 Dec 2021 Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang

Nowadays, due to the breakthrough in natural language generation (NLG), including machine translation, document summarization, image captioning, etc., NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred billion word generations per day.

Document Summarization Image Captioning +3

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

no code implementations NeurIPS 2021 Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low

In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance.

BIG-bench Machine Learning Fairness +1

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

1 code implementation NeurIPS 2021 Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class).

Backdoor Attack

How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

no code implementations ICLR 2022 Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu sun

In this work, we observe an interesting phenomenon that the variations of parameters are always AWPs when tuning the trained clean model to inject backdoors.

FedKD: Communication Efficient Federated Learning via Knowledge Distillation

no code implementations30 Aug 2021 Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie

Instead of directly communicating the large models between clients and server, we propose an adaptive mutual distillation framework to reciprocally learn a student and a teacher model on each client, where only the student model is shared by different clients and updated collaboratively to reduce the communication cost.

Federated Learning Knowledge Distillation
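As a rough sketch of the two ideas in this abstract, using invented numbers and a scalar stand-in for each model's prediction: mutual distillation nudges student and teacher toward agreement, and only the small student model ever crosses the network, which is where the communication saving comes from. The update rule and parameter counts below are illustrative, not the paper's.

```python
def distill_step(student_pred, teacher_pred, lr=0.3):
    # mutual distillation sketch: each model nudges its prediction toward
    # the other's (a stand-in for matching soft labels on each client)
    s = student_pred + lr * (teacher_pred - student_pred)
    t = teacher_pred + lr * (student_pred - teacher_pred)
    return s, t

s, t = 0.0, 1.0
for _ in range(10):
    s, t = distill_step(s, t)
assert abs(s - t) < 1e-3  # the two models converge toward agreement

# Only the small student is uploaded, so per-round traffic shrinks
# (hypothetical parameter counts for illustration).
student_params, teacher_params = 1_000_000, 100_000_000
assert teacher_params // student_params == 100  # 100x less communication
```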

A Novel Attribute Reconstruction Attack in Federated Learning

no code implementations16 Aug 2021 Lingjuan Lyu, Chen Chen

We perform the first systematic evaluation of attribute reconstruction attack (ARA) launched by the malicious server in the FL system, and empirically demonstrate that the shared epoch-averaged local model gradients can reveal sensitive attributes of local training data of any victim participant.

Attribute Federated Learning +1

A Vertical Federated Learning Framework for Graph Convolutional Network

no code implementations22 Jun 2021 Xiang Ni, Xiaolong Xu, Lingjuan Lyu, Changhua Meng, Weiqiang Wang

Recently, Graph Neural Network (GNN) has achieved remarkable success in various real-world problems on graph data.

Node Classification Privacy Preserving +1

Defending Against Backdoor Attacks in Natural Language Generation

1 code implementation3 Jun 2021 Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, Tianwei Zhang

The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks and to generating malicious sequences that could be sexist or offensive.

Backdoor Attack Dialogue Generation +2

Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs

no code implementations23 May 2021 Chen Chen, Xuanli He, Lingjuan Lyu, Fangzhao Wu

In this work, we bridge this gap by first presenting an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model) with only a limited number of queries.

Attribute Inference Attack +4

Robust Training Using Natural Transformation

no code implementations10 May 2021 Shuo Wang, Lingjuan Lyu, Surya Nepal, Carsten Rudolph, Marthie Grobler, Kristen Moore

We target attributes of the input images that are independent of the class identification, and manipulate those attributes to mimic real-world natural transformations (NaTra) of the inputs, which are then used to augment the training dataset of the image classifier.

Attribute Data Augmentation +2

Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!

1 code implementation NAACL 2021 Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun

Finally, we investigate two defence strategies to protect the victim model and find that unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models.

Model extraction text-classification +2

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

1 code implementation ICLR 2021 Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

NAD utilizes a teacher network to guide the finetuning of the backdoored student network on a small clean subset of data such that the intermediate-layer attention of the student network aligns with that of the teacher network.


no code implementations1 Jan 2021 Xuanli He, Lingjuan Lyu, Lichao Sun, Xiaojun Chang, Jun Zhao

We then demonstrate how the extracted model can be exploited to develop effective attribute inference attack to expose sensitive information of the training data.

Attribute Inference Attack +4

Privacy and Robustness in Federated Learning: Attacks and Defenses

no code implementations7 Dec 2020 Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu

Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.

Federated Learning Privacy Preserving

A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning

2 code implementations20 Nov 2020 Xinyi Xu, Lingjuan Lyu

In this paper, we propose a novel Robust and Fair Federated Learning (RFFL) framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism.

Adversarial Defense Adversarial Robustness +2
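The abstract names a reputation mechanism without specifying it; the following is a minimal sketch of one plausible ingredient, scoring each client by the cosine alignment of its uploaded gradient with the aggregate. The `alpha` smoothing factor and all vectors are invented for illustration.

```python
import math

def cosine(u, v):
    # cosine similarity between two gradient vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def update_reputation(rep, client_grad, agg_grad, alpha=0.5):
    # reputation sketch: moving average of alignment with the aggregate
    return (1 - alpha) * rep + alpha * cosine(client_grad, agg_grad)

agg = [1.0, 1.0]           # server-side aggregated gradient
honest = [0.9, 1.1]        # roughly aligned with the aggregate
freerider = [-1.0, 0.5]    # poorly aligned contribution

r_honest = update_reputation(1.0, honest, agg)
r_free = update_reputation(1.0, freerider, agg)
assert r_honest > r_free   # aligned clients accrue higher reputation
```

A server could then weight (or gate) each client's share of the global model by its reputation, which is the fairness-robustness coupling the abstract describes.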

Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness

2 code implementations Findings of the Association for Computational Linguistics 2020 Lingjuan Lyu, Xuanli He, Yitong Li

It has been demonstrated that hidden representation learned by a deep model can encode private information of the input, hence can be exploited to recover such information with reasonable accuracy.


Federated Model Distillation with Noise-Free Differential Privacy

no code implementations11 Sep 2020 Lichao Sun, Lingjuan Lyu

Conventional federated learning directly averages model weights, which is only possible for collaboration between models with homogeneous architectures.

Federated Learning

Collaborative Fairness in Federated Learning

1 code implementation27 Aug 2020 Lingjuan Lyu, Xinyi Xu, Qian Wang

In current deep learning paradigms, local training or the Standalone framework tends to result in overfitting and thus poor generalizability.

Fairness Federated Learning

Local Differential Privacy and Its Applications: A Comprehensive Survey

no code implementations9 Aug 2020 Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, Kwok-Yan Lam

Local differential privacy (LDP), as a strong privacy tool, has been widely deployed in the real world in recent years.

Cryptography and Security

Towards Differentially Private Text Representations

no code implementations25 Jun 2020 Lingjuan Lyu, Yitong Li, Xuanli He, Tong Xiao

Most deep learning frameworks require users to pool their local data or model updates to a trusted server to train or maintain a global model.

Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification

no code implementations25 May 2020 Chaochao Chen, Jun Zhou, Longfei Zheng, Huiwen Wu, Lingjuan Lyu, Jia Wu, Bingzhe Wu, Ziqi Liu, Li Wang, Xiaolin Zheng

Recently, Graph Neural Network (GNN) has achieved remarkable progress in various real-world tasks on graph data, consisting of node features and the adjacent information between different nodes.

Classification General Classification +2

Local Differential Privacy based Federated Learning for Internet of Things

no code implementations19 Apr 2020 Yang Zhao, Jun Zhao, Mengmeng Yang, Teng Wang, Ning Wang, Lingjuan Lyu, Dusit Niyato, Kwok-Yan Lam

To avoid the privacy threat and reduce the communication cost, in this paper, we propose to integrate federated learning and local differential privacy (LDP) to enable crowdsourcing applications to train machine learning models.

BIG-bench Machine Learning Federated Learning +1

Threats to Federated Learning: A Survey

no code implementations4 Mar 2020 Lingjuan Lyu, Han Yu, Qiang Yang

It is thus of paramount importance to make FL system designers aware of the implications of future FL algorithm design on privacy-preservation.

Federated Learning

Privacy-Preserving Blockchain-Based Federated Learning for IoT Devices

no code implementations26 Jun 2019 Yang Zhao, Jun Zhao, Linshan Jiang, Rui Tan, Dusit Niyato, Zengxiang Li, Lingjuan Lyu, Yingbo Liu

To help manufacturers develop a smart home system, we design a federated learning (FL) system leveraging the reputation mechanism to assist home appliance manufacturers to train a machine learning model based on customers' data.

Edge-computing Federated Learning +1

Towards Fair and Privacy-Preserving Federated Deep Models

1 code implementation4 Jun 2019 Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin, Han Yu, Kee Siong Ng

This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.

Benchmarking Fairness +3
