Search Results for author: Tianlin Li

Found 15 papers, 1 paper with code

CaBaFL: Asynchronous Federated Learning via Hierarchical Cache and Feature Balance

no code implementations • 19 Apr 2024 • Zeke Xia, Ming Hu, Dengke Yan, Xiaofei Xie, Tianlin Li, Anran Li, Junlong Zhou, Mingsong Chen

To address the problem of imbalanced data, the feature balance-guided device selection strategy in CaBaFL adopts the activation distribution as a metric, which enables each intermediate model to be trained across devices with totally balanced data distributions before aggregation.
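A minimal sketch of the feature balance-guided device selection described above, assuming each device reports a per-class activation histogram. The `imbalance` metric (KL divergence to the uniform distribution) and the greedy selection loop are illustrative choices, not CaBaFL's exact algorithm:

```python
import numpy as np

def imbalance(dist):
    """KL divergence from the uniform distribution (lower means more balanced)."""
    dist = dist / dist.sum()
    uniform = np.full_like(dist, 1.0 / len(dist))
    return float(np.sum(dist * np.log((dist + 1e-12) / uniform)))

def select_devices(device_hist, k):
    """Greedily pick k devices so the accumulated activation distribution stays balanced."""
    selected = []
    accumulated = np.zeros_like(next(iter(device_hist.values())))
    for _ in range(k):
        best = min(
            (d for d in device_hist if d not in selected),
            key=lambda d: imbalance(accumulated + device_hist[d]),
        )
        selected.append(best)
        accumulated = accumulated + device_hist[best]
    return selected

# Example: 4 devices with skewed per-class activation statistics over 3 classes.
hists = {
    "dev0": np.array([9.0, 1.0, 1.0]),
    "dev1": np.array([1.0, 9.0, 1.0]),
    "dev2": np.array([1.0, 1.0, 9.0]),
    "dev3": np.array([8.0, 2.0, 1.0]),
}
print(select_devices(hists, k=3))  # picks a complementary set of devices
```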

BadEdit: Backdooring Large Language Models by Model Editing

no code implementations • 20 Mar 2024 • Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu

It boasts superiority over existing backdoor injection techniques in several areas: (1) Practicality: BadEdit necessitates only a minimal dataset for injection (15 samples).

Backdoor Attack · knowledge editing

Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One

no code implementations • 19 Feb 2024 • Tianlin Li, XiaoYu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu

Building on this insight and observation, we develop FairThinking, a pipeline designed to automatically generate roles that enable LLMs to articulate diverse perspectives for fair expressions.

Fairness · Language Modelling +1
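A hedged sketch of a FairThinking-style pipeline: generate stakeholder roles for a question, query the model once per role, then merge the perspectives into one answer. The prompts and the `llm(prompt) -> str` callable are placeholders, not the paper's actual prompts or implementation:

```python
def fair_answer(question, llm, num_roles=3):
    # Step 1: ask the model for stakeholder roles with different perspectives.
    roles_prompt = (
        f"List {num_roles} stakeholders with different perspectives on: {question}. "
        "Return one role per line."
    )
    roles = [r.strip() for r in llm(roles_prompt).splitlines() if r.strip()][:num_roles]

    # Step 2: answer the question once per role.
    perspectives = [llm(f"As {role}, answer: {question}") for role in roles]

    # Step 3: merge the role-specific answers into a single, fairer response.
    merge_prompt = (
        f"Question: {question}\n"
        + "\n".join(f"- {r}: {p}" for r, p in zip(roles, perspectives))
        + "\nCombine these perspectives into a single, fair answer."
    )
    return llm(merge_prompt)

# Usage with any text-completion function: fair_answer("Should X be allowed?", my_llm)
```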

Purifying Large Language Models by Ensembling a Small Language Model

no code implementations • 19 Feb 2024 • Tianlin Li, Qian Liu, Tianyu Pang, Chao Du, Qing Guo, Yang Liu, Min Lin

The emerging success of large language models (LLMs) heavily relies on collecting abundant training data from external (untrusted) sources.

Data Poisoning · Language Modelling
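A hedged sketch of the purification idea in the title: ensemble a (possibly poisoned) large LM with a small trusted LM at decoding time. The log-probability interpolation rule and the `alpha` weight below are assumptions for illustration, not necessarily the paper's combination rule:

```python
import numpy as np

def ensemble_next_token(large_logits, small_logits, alpha=0.5):
    """Interpolate the two models' next-token log-probabilities and pick the argmax."""
    def log_softmax(x):
        x = x - x.max()
        return x - np.log(np.exp(x).sum())
    mixed = (1 - alpha) * log_softmax(large_logits) + alpha * log_softmax(small_logits)
    return int(np.argmax(mixed))

# Toy vocabulary of 5 tokens: the large LM strongly prefers token 3 (e.g. a poisoned
# trigger response), the small clean LM disagrees, and the ensemble moderates it.
large = np.array([0.1, 0.2, 0.3, 5.0, 0.1])
small = np.array([2.0, 0.1, 0.1, -3.0, 0.1])
print(ensemble_next_token(large, small, alpha=0.6))  # 0, not the poisoned token 3
```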

FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution

no code implementations • 6 Feb 2024 • Qi Zhou, Dongxia Wang, Tianlin Li, Zhihong Xu, Yang Liu, Kui Ren, Wenhai Wang, Qing Guo

To expose this potential vulnerability, we aim to build an adversarial attack forcing SDEdit to generate a specific data distribution aligned with a specified attribute (e.g., female), without changing the input's attribute characteristics.

Adversarial Attack · Attribute +1

IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks

1 code implementation • 18 Oct 2023 • Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo

The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks.

Adversarial Robustness
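To illustrate the resampling rationale above, here is a simplified sketch that re-samples the input on a slightly jittered coordinate grid with bilinear interpolation before classification. IRAD itself learns the resampling through an implicit representation, which this toy version does not:

```python
import torch
import torch.nn.functional as F

def resample(image, jitter=0.01):
    """image: (N, C, H, W) tensor in [0, 1]; returns a resampled image of the same shape."""
    n, _, h, w = image.shape
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).repeat(n, 1, 1, 1)  # (N, H, W, 2)
    grid = grid + jitter * torch.randn_like(grid)  # perturb the sampling points slightly
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)

x = torch.rand(1, 3, 32, 32)
x_def = resample(x)   # feed x_def (not x) to the classifier
print(x_def.shape)    # torch.Size([1, 3, 32, 32])
```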

FAIRER: Fairness as Decision Rationale Alignment

no code implementations • 27 Jun 2023 • Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu

Existing fairness regularization terms fail to achieve decision rationale alignment because they only constrain last-layer outputs while ignoring intermediate neuron alignment.

Fairness
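A minimal sketch of the intermediate-neuron alignment idea: penalize group-wise differences of activations at several layers rather than only at the last-layer outputs. The squared-difference penalty below is an illustrative stand-in for FAIRER's actual regularizer:

```python
import torch

def neuron_alignment_penalty(activations, group):
    """activations: list of (N, D_l) tensors from intermediate layers;
    group: (N,) tensor of 0/1 sensitive-attribute labels."""
    penalty = 0.0
    for act in activations:
        mean_g0 = act[group == 0].mean(dim=0)
        mean_g1 = act[group == 1].mean(dim=0)
        penalty = penalty + (mean_g0 - mean_g1).pow(2).mean()
    return penalty

acts = [torch.randn(8, 16), torch.randn(8, 32)]   # activations from two layers
grp = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
reg = neuron_alignment_penalty(acts, grp)         # add, e.g., 0.1 * reg to the task loss
print(reg)
```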

On the Robustness of Segment Anything

no code implementations • 25 May 2023 • Yihao Huang, Yue Cao, Tianlin Li, Felix Juefei-Xu, Di Lin, Ivor W. Tsang, Yang Liu, Qing Guo

Second, we extend representative adversarial attacks against SAM and study the influence of different prompts on robustness.

Autonomous Vehicles · valid

Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models

no code implementations • 18 May 2023 • Yihao Huang, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, Yang Liu

Although recent personalization methods have democratized high-resolution image synthesis by enabling swift concept acquisition with minimal examples and lightweight computation, they also present an exploitable avenue for highly accessible backdoor attacks.

Backdoor Attack · Image Generation

NPC: Neuron Path Coverage via Characterizing Decision Logic of Deep Neural Networks

no code implementations • 24 Mar 2022 • Xiaofei Xie, Tianlin Li, Jian Wang, Lei Ma, Qing Guo, Felix Juefei-Xu, Yang Liu

Inspired by software testing, a number of structural coverage criteria are designed and proposed to measure the test adequacy of DNNs.

Defect Detection · DNN Testing +1
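A toy sketch of a path-style coverage measure in the spirit of the coverage criteria mentioned above: treat the most-activated neuron per layer as an input's "decision path" and count the distinct paths exercised by a test suite. NPC's actual criterion, based on critical decision paths, is more involved:

```python
import numpy as np

def decision_path(layer_activations):
    """layer_activations: list of 1-D arrays (one per layer) for a single input."""
    return tuple(int(np.argmax(act)) for act in layer_activations)

def path_coverage(test_suite):
    """test_suite: iterable of per-input activation lists; returns (#distinct paths, paths)."""
    paths = {decision_path(acts) for acts in test_suite}
    return len(paths), paths

suite = [
    [np.array([0.1, 0.9]), np.array([0.2, 0.3, 0.5])],
    [np.array([0.8, 0.2]), np.array([0.2, 0.3, 0.5])],
    [np.array([0.1, 0.9]), np.array([0.2, 0.3, 0.5])],  # duplicate path
]
print(path_coverage(suite)[0])  # 2 distinct paths exercised
```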

Unveiling Project-Specific Bias in Neural Code Models

no code implementations • 19 Jan 2022 • Zhiming Li, Yanzhou Li, Tianlin Li, Mengnan Du, Bozhi Wu, Yushi Cao, Junzhe Jiang, Yang Liu

We propose a Cond-Idf measurement to interpret this behavior, which quantifies the relatedness of a token with a label and its project-specificness.

Adversarial Robustness · Vulnerability Detection
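One plausible reading of a Cond-Idf-style score, sketched below, multiplies the conditional frequency of a token given a label (relatedness) by an inverse project frequency (project-specificness). The exact definition in the paper may differ, so treat the names and formula as assumptions:

```python
import math

def cond_idf(token, label, samples, num_projects, projects_with_token):
    """samples: list of (tokens, label, project) triples; score = p(token | label) * idf."""
    with_label = [toks for toks, lab, _ in samples if lab == label]
    p_token_given_label = sum(token in toks for toks in with_label) / max(len(with_label), 1)
    idf = math.log(num_projects / max(projects_with_token, 1))
    return p_token_given_label * idf

samples = [
    (["free", "buffer"], "vulnerable", "projA"),
    (["free", "lock"], "vulnerable", "projA"),
    (["print", "log"], "benign", "projB"),
]
# "free" always co-occurs with the "vulnerable" label but appears in only 1 of 10 projects,
# so it scores high, i.e. it looks like a project-specific shortcut for the model.
print(cond_idf("free", "vulnerable", samples, num_projects=10, projects_with_token=1))
```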

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity

no code implementations • 16 Sep 2019 • Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li

In this paper, we first draw the close connection between adversarial robustness and neuron sensitivities, as sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting.

Adversarial Robustness · Decision Making
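A minimal sketch of the neuron-sensitivity notion above, computed as the average shift of each neuron's activation between clean inputs and their adversarial counterparts; variable names are illustrative:

```python
import numpy as np

def neuron_sensitivity(clean_acts, adv_acts):
    """clean_acts, adv_acts: (N, D) activations of one layer on clean / adversarial inputs."""
    return np.abs(adv_acts - clean_acts).mean(axis=0)   # (D,) per-neuron sensitivity

clean = np.random.rand(16, 8)
adv = clean + 0.3 * np.random.randn(16, 8) * (np.arange(8) == 2)  # only neuron 2 shifts
sens = neuron_sensitivity(clean, adv)
print(np.argsort(sens)[::-1][:3])  # indices of the most sensitive neurons (2 comes first)
```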
