Search Results for author: Tianlin Li

Found 24 papers, 4 papers with code

Benchmarking Bias in Large Language Models during Role-Playing

no code implementations • 1 Nov 2024 • Xinyue Li, Zhenpeng Chen, Jie M. Zhang, Yiling Lou, Tianlin Li, Weisong Sun, Yang Liu, Xuanzhe Liu

Our benchmark reveals 72,716 biased responses across the studied LLMs, with individual models yielding between 7,754 and 16,963 biased responses, underscoring the prevalence of bias in role-playing contexts.

Benchmarking Fairness +1

A Survey on Physical Adversarial Attacks against Face Recognition Systems

no code implementations • 10 Oct 2024 • Mingsi Wang, Jiachen Zhou, Tianlin Li, Guozhu Meng, Kai Chen

However, a systematic overview focused on physical adversarial attacks against FR systems is still lacking, hindering an in-depth exploration of the challenges and future directions in this field.

Adversarial Attack Face Recognition +1

Speculative Coreset Selection for Task-Specific Fine-tuning

no code implementations • 2 Oct 2024 • XiaoYu Zhang, Juan Zhai, Shiqing Ma, Chao Shen, Tianlin Li, Weipeng Jiang, Yang Liu

Task-specific fine-tuning is essential for the deployment of large language models (LLMs), but it requires significant computational resources and time.

Federated Graph Learning with Adaptive Importance-based Sampling

no code implementations • 23 Sep 2024 • Anran Li, YuanYuan Chen, Chao Ren, Wenhan Wang, Ming Hu, Tianlin Li, Han Yu, Qingyu Chen

For privacy-preserving graph learning tasks involving distributed graph datasets, federated learning (FL)-based GCN (FedGCN) training is required.

Federated Learning Graph Sampling +1

Dormant: Defending against Pose-driven Human Image Animation

1 code implementation • 22 Sep 2024 • Jiachen Zhou, Mingsi Wang, Tianlin Li, Guozhu Meng, Kai Chen

Dormant applies protective perturbation to one human image, preserving the visual similarity to the original but resulting in poor-quality video generation.

Image Animation Video Generation

Perception-guided Jailbreak against Text-to-Image Models

no code implementations • 20 Aug 2024 • Yihao Huang, Le Liang, Tianlin Li, Xiaojun Jia, Run Wang, Weikai Miao, Geguang Pu, Yang Liu

Specifically, we propose identifying a safe phrase that is similar in human perception yet inconsistent in text semantics with the target unsafe word and using it as a substitution.

Compromising Embodied Agents with Contextual Backdoor Attacks

no code implementations • 6 Aug 2024 • Aishan Liu, Yuguang Zhou, Xianglong Liu, Tianyuan Zhang, Siyuan Liang, Jiakai Wang, Yanjun Pu, Tianlin Li, Junqi Zhang, Wenbo Zhou, Qing Guo, DaCheng Tao

To enable context-dependent behaviors in downstream agents, we implement a dual-modality activation strategy that controls both the generation and execution of program defects through textual and visual triggers.

Autonomous Driving Robot Manipulation +1

NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing

no code implementations • 26 Jul 2024 • Shide Zhou, Tianlin Li, Yihao Huang, Ling Shi, Kailong Wang, Yang Liu, Haoyu Wang

In this work, we implement NeuSemSlice, a novel framework that introduces the semantic slicing technique to effectively identify critical neuron-level semantic components in DNN models for semantic-aware model maintenance tasks.

Model Compression Semantic Similarity +1

MAVIN: Multi-Action Video Generation with Diffusion Models via Transition Video Infilling

1 code implementation • 28 May 2024 • BoWen Zhang, Xiaofei Xie, Haotian Lu, Na Ma, Tianlin Li, Qing Guo

The core challenge lies in generating smooth and natural transitions between these segments given the inherent complexity and variability of action transitions.

Video Generation

CaBaFL: Asynchronous Federated Learning via Hierarchical Cache and Feature Balance

no code implementations • 19 Apr 2024 • Zeke Xia, Ming Hu, Dengke Yan, Xiaofei Xie, Tianlin Li, Anran Li, Junlong Zhou, Mingsong Chen

To address the problem of imbalanced data, the feature balance-guided device selection strategy in CaBaFL adopts the activation distribution as a metric, which enables each intermediate model to be trained across devices with totally balanced data distributions before aggregation.

Federated Learning

BadEdit: Backdooring large language models by model editing

1 code implementation • 20 Mar 2024 • Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu

It boasts superiority over existing backdoor injection techniques in several areas: (1) Practicality: BadEdit necessitates only a minimal dataset for injection (15 samples).

Backdoor Attack knowledge editing

Purifying Large Language Models by Ensembling a Small Language Model

no code implementations • 19 Feb 2024 • Tianlin Li, Qian Liu, Tianyu Pang, Chao Du, Qing Guo, Yang Liu, Min Lin

The emerging success of large language models (LLMs) heavily relies on collecting abundant training data from external (untrusted) sources.

Data Poisoning Language Modelling

Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One

no code implementations • 19 Feb 2024 • Tianlin Li, XiaoYu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu

Building on this insight and observation, we develop FairThinking, a pipeline designed to automatically generate roles that enable LLMs to articulate diverse perspectives for fair expressions.

Fairness Language Modelling +1

FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution

no code implementations • 6 Feb 2024 • Qi Zhou, Dongxia Wang, Tianlin Li, Zhihong Xu, Yang Liu, Kui Ren, Wenhai Wang, Qing Guo

To expose this potential vulnerability, we aim to build an adversarial attack forcing SDEdit to generate a specific data distribution aligned with a specified attribute (e.g., female), without changing the input's attribute characteristics.

Adversarial Attack Attribute +1

IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks

1 code implementation • 18 Oct 2023 • Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo

The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks.

Adversarial Robustness

FAIRER: Fairness as Decision Rationale Alignment

no code implementations • 27 Jun 2023 • Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu

Existing fairness regularization terms fail to achieve decision rationale alignment because they only constrain last-layer outputs while ignoring intermediate neuron alignment.

Fairness

On the Robustness of Segment Anything

no code implementations • 25 May 2023 • Yihao Huang, Yue Cao, Tianlin Li, Felix Juefei-Xu, Di Lin, Ivor W. Tsang, Yang Liu, Qing Guo

Second, we extend representative adversarial attacks against SAM and study the influence of different prompts on robustness.

Autonomous Vehicles valid

Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models

no code implementations • 18 May 2023 • Yihao Huang, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, Yang Liu

Although recent personalization methods have democratized high-resolution image synthesis by enabling swift concept acquisition with minimal examples and lightweight computation, they also present an exploitable avenue for highly accessible backdoor attacks.

Backdoor Attack Image Generation

NPC: Neuron Path Coverage via Characterizing Decision Logic of Deep Neural Networks

no code implementations • 24 Mar 2022 • Xiaofei Xie, Tianlin Li, Jian Wang, Lei Ma, Qing Guo, Felix Juefei-Xu, Yang Liu

Inspired by software testing, a number of structural coverage criteria have been designed and proposed to measure the test adequacy of DNNs.

Defect Detection DNN Testing +2

Unveiling Project-Specific Bias in Neural Code Models

no code implementations • 19 Jan 2022 • Zhiming Li, Yanzhou Li, Tianlin Li, Mengnan Du, Bozhi Wu, Yushi Cao, Junzhe Jiang, Yang Liu

We propose a Cond-Idf measurement to interpret this behavior, which quantifies the relatedness of a token with a label and its project-specificness.

Adversarial Robustness Vulnerability Detection

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity

no code implementations • 16 Sep 2019 • Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li

In this paper, we first draw the close connection between adversarial robustness and neuron sensitivities, as sensitive neurons make the most non-trivial contributions to model predictions in the adversarial setting.

Adversarial Robustness Decision Making
