Search Results for author: Chao Shen

Found 50 papers, 18 papers with code

Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving

1 code implementation • 26 Mar 2024 • Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen

Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks.

Adversarial Attack · Autonomous Driving +1

Instructing Large Language Models to Identify and Ignore Irrelevant Conditions

1 code implementation • 19 Mar 2024 • Zhenyu Wu, Chao Shen, Meng Jiang

Lastly, it instructs the LLMs with the verification results on relevant and irrelevant conditions to avoid confusion and improve the reasoning paths.

Math · Mathematical Reasoning
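
The condition-screening step described above can be pictured as a two-stage prompt. Below is a minimal Python sketch, assuming a hypothetical llm() completion function; the paper's actual prompts and pipeline differ:

```python
def solve_ignoring_irrelevant(problem: str, llm) -> str:
    # Stage 1: ask the model to flag conditions irrelevant to the question.
    irrelevant = llm(
        f"Problem: {problem}\n"
        "List the given conditions that are NOT needed to answer the question."
    )
    # Stage 2: solve while explicitly instructing the model to ignore them.
    return llm(
        f"Problem: {problem}\n"
        f"The following conditions are irrelevant; ignore them: {irrelevant}\n"
        "Now solve the problem step by step."
    )
```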

Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One

no code implementations • 19 Feb 2024 • Tianlin Li, XiaoYu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu

Building on this insight and observation, we develop FairThinking, a pipeline designed to automatically generate roles that enable LLMs to articulate diverse perspectives for fair expressions.

Fairness · Language Modelling +1

Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks

1 code implementation • 18 Feb 2024 • Yichen Wang, Shangbin Feng, Abe Bohan Hou, Xiao Pu, Chao Shen, Xiaoming Liu, Yulia Tsvetkov, Tianxing He

Our experiments reveal that almost none of the existing detectors remain robust under all the attacks, and all detectors exhibit different loopholes.

BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning

no code implementations • 26 Jan 2024 • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang, Li Liu, Chao Shen

We hope that our efforts can build a solid foundation for backdoor learning, helping researchers investigate existing algorithms, develop more innovative algorithms, and explore the intrinsic mechanisms of backdoor learning.

Backdoor Attack

DREAM: Debugging and Repairing AutoML Pipelines

no code implementations • 31 Dec 2023 • XiaoYu Zhang, Juan Zhai, Shiqing Ma, Chao Shen

In response to the challenge of model design, researchers proposed Automated Machine Learning (AutoML) systems, which automatically search for model architecture and hyperparameters for a given task.

AutoML

SlowTrack: Increasing the Latency of Camera-based Perception in Autonomous Driving Using Adversarial Examples

no code implementations • 15 Dec 2023 • Chen Ma, Ningfei Wang, Qi Alfred Chen, Chao Shen

Our evaluation results show that the system-level effects can be significantly improved, i.e., the vehicle crash rate of SlowTrack is around 95% on average, while existing works achieve only around 30%.

Autonomous Driving · object-detection +1

Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval

no code implementations • 12 Dec 2023 • Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Qian Li, Chao Shen

Furthermore, CA prevents the consequential model collapse, based on a novel metric, collapseness, which is incorporated into the optimization of perturbation.

Adversarial Defense · Image Retrieval +2

Get an A in Math: Progressive Rectification Prompting

1 code implementation • 11 Dec 2023 • Zhenyu Wu, Meng Jiang, Chao Shen

Given an initial answer from CoT, PRP iterates a verify-then-rectify process to progressively identify incorrect answers and rectify the reasoning paths.

Math
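
The verify-then-rectify loop summarized above is easy to sketch. The outline below is illustrative Python, again assuming a hypothetical llm() completion function rather than the paper's exact prompts:

```python
def progressive_rectification(question: str, llm, max_iters: int = 5) -> str:
    answer = llm(f"Solve step by step and state a final answer: {question}")
    for _ in range(max_iters):
        # Verify: substitute the candidate answer back into the problem.
        verdict = llm(
            f"Suppose the answer to '{question}' is {answer}. "
            "Is that consistent with all given conditions? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            break  # verification passed; keep the current answer
        # Rectify: re-solve while masking out the answer known to be wrong.
        answer = llm(
            f"Solve step by step: {question}\n"
            f"Note: {answer} has already been shown to be incorrect."
        )
    return answer
```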

Universal Deoxidation of Semiconductor Substrates Assisted by Machine-Learning and Real-Time-Feedback-Control

no code implementations • 4 Dec 2023 • Chao Shen, Wenkang Zhan, Jian Tang, Zhaofeng Wu, Bo Xu, Chao Zhao, Zhanguo Wang

It standardizes deoxidation temperatures across various equipment and substrate materials, advancing standardization research in semiconductor preparation and marking a significant milestone in thin-film growth technology.

Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning

no code implementations • 15 Oct 2023 • Yulong Yang, Chenhao Lin, Xiang Ji, Qiwei Tian, Qian Li, Hongshan Yang, Zhibo Wang, Chao Shen

Instead, a one-shot adversarial augmentation prior to training is sufficient, and we name this new defense paradigm Data-centric Robust Learning (DRL).

Fairness

Exploiting Facial Relationships and Feature Aggregation for Multi-Face Forgery Detection

no code implementations • 7 Oct 2023 • Chenhao Lin, Fangbin Yi, Hang Wang, Qian Li, Jingyi Deng, Chao Shen

Face forgery techniques have emerged as a forefront concern, and numerous detection approaches have been proposed to address this challenge.

Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Generation for Few-shot Learning

1 code implementation • 14 Aug 2023 • Chengzhengxu Li, Xiaoming Liu, Yichen Wang, Duyi Li, Yu Lan, Chao Shen

However, prior discrete prompt optimization methods require expert knowledge to design the base prompt set and identify high-quality prompts, which is costly, inefficient, and subjective.

Few-Shot Learning · Reinforcement Learning (RL)

Hard Adversarial Example Mining for Improving Robust Fairness

no code implementations • 3 Aug 2023 • Chenhao Lin, Xiang Ji, Yulong Yang, Qian Li, Chao Shen, Run Wang, Liming Fang

Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AE).

Fairness

Machine-Learning-Assisted and Real-Time-Feedback-Controlled Growth of InAs/GaAs Quantum Dots

no code implementations • 22 Jun 2023 • Chao Shen, Wenkang Zhan, Kaiyao Xin, Manyang Li, Zhenyu Sun, Hui Cong, Chi Xu, Jian Tang, Zhaofeng Wu, Bo Xu, Zhongming Wei, Chunlai Xue, Chao Zhao, Zhanguo Wang

Self-assembled InAs/GaAs quantum dots (QDs) have properties highly valuable for developing various optoelectronic devices such as QD lasers and single photon sources.

IMAP: Intrinsically Motivated Adversarial Policy

no code implementations • 4 May 2023 • Xiang Zheng, Xingjun Ma, Shengjie Wang, Xinyu Wang, Chao Shen, Cong Wang

Our experiments validate the effectiveness of the four types of adversarial intrinsic regularizers and BR in enhancing black-box adversarial policy learning across a variety of environments.

Reinforcement Learning (RL)

CILIATE: Towards Fairer Class-based Incremental Learning by Dataset and Training Refinement

no code implementations • 9 Apr 2023 • Xuanqi Gao, Juan Zhai, Shiqing Ma, Chao Shen, Yufei Chen, Shiwei Wang

The common practice leverages incremental learning (IL), e.g., Class-based Incremental Learning (CIL), which updates output labels, to update the model with new data and a limited amount of old data.

Fairness · Incremental Learning
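
For readers unfamiliar with the CIL setting above, the sketch below shows a generic rehearsal-style incremental update in Python (an illustration of CIL with limited old data, not CILIATE's dataset and training refinement; model and train stand in for any classifier and supervised training routine):

```python
import random
from collections import defaultdict

def class_incremental_update(model, train, new_data, memory, mem_per_class=20):
    """One CIL step: train on new classes plus a small buffer of old exemplars.

    new_data is a list of (sample, label) pairs; memory maps previously seen
    labels to their retained exemplars.
    """
    old_exemplars = [x for bucket in memory.values() for x in bucket]
    train(model, new_data + old_exemplars)      # new data + limited old data
    by_label = defaultdict(list)
    for sample, label in new_data:
        by_label[label].append((sample, label))
    for label, samples in by_label.items():     # keep a few exemplars per class
        memory[label] = random.sample(samples, min(mem_per_class, len(samples)))
```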

End-to-end Face-swapping via Adaptive Latent Representation Learning

no code implementations • 7 Mar 2023 • Chenhao Lin, Pengbin Hu, Chao Shen, Qian Li

Taking full advantage of the excellent performance of StyleGAN, style transfer-based face swapping methods have been extensively investigated recently.

Attribute · Face Swapping +2

CoCo: Coherence-Enhanced Machine-Generated Text Detection Under Data Limitation With Contrastive Learning

1 code implementation • 20 Dec 2022 • Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Hang Pu, Yu Lan, Chao Shen

Machine-Generated Text (MGT) detection, a task that discriminates MGT from Human-Written Text (HWT), plays a crucial role in preventing misuse of text generative models, which have recently excelled at mimicking human writing styles.

Contrastive Learning · Text Detection

Amplifying Membership Exposure via Data Poisoning

1 code implementation • 1 Nov 2022 • Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang

In this paper, we investigate the third type of exploitation of data poisoning: increasing the risks of privacy leakage for benign training samples.

Data Poisoning · Overall - Test +1

BackdoorBench: A Comprehensive Benchmark of Backdoor Learning

1 code implementation • 25 Jun 2022 • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen

However, we find that the evaluations of new methods are often not thorough enough to verify their claims and actual performance, mainly due to rapid development, diverse settings, and the difficulties of implementation and reproducibility.

Backdoor Attack

FairNeuron: Improving Deep Neural Network Fairness with Adversary Games on Selective Neurons

1 code implementation • 6 Apr 2022 • Xuanqi Gao, Juan Zhai, Shiqing Ma, Chao Shen, Yufei Chen, Qian Wang

To solve this issue, there have been a number of works trying to improve model fairness by using an adversarial game at the model level.

Fairness

Energy-optimal Three-dimensional Path-following Control of Autonomous Underwater Vehicles under Ocean Currents

no code implementations • 22 Mar 2022 • Niankai Yang, Chao Shen, Matthew Johnson-Roberson, Jing Sun

In the first stage, the surge velocity, heave velocity, and pitch angle setpoints are optimized by minimizing the required vehicle propulsion energy under currents, and the line-of-sight (LOS) guidance law is used to generate the yaw angle setpoint that ensures path following.
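
For context, the line-of-sight guidance law referenced above is commonly written, in its standard planar form (the paper's 3D notation may differ), as

```latex
\psi_d = \psi_p + \arctan\!\left(\frac{-y_e}{\Delta}\right)
```

where \psi_p is the path-tangential angle, y_e the cross-track error, and \Delta > 0 the lookahead distance.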

Towards Benchmarking and Evaluating Deepfake Detection

no code implementations • 4 Mar 2022 • Chenhao Lin, Jingyi Deng, Pengbin Hu, Chao Shen, Qian Wang, Qi Li

Deepfake detection automatically recognizes manipulated media through analysis of the differences between manipulated and non-altered videos.

Benchmarking · DeepFake Detection +1

Property Inference Attacks Against GANs

1 code implementation • 15 Nov 2021 • Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang

In addition, we show that our attacks can be used to enhance the performance of membership inference against GANs.

Attribute · Fairness +1

Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation

1 code implementation • MM 2021 (Proceedings of the ACM International Conference on Multimedia) • Yunjie Ge, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, Cong Wang

In this paper, we, for the first time, propose a novel Anti-Distillation Backdoor Attack (ADBA), in which the backdoor embedded in the public teacher model can survive the knowledge distillation process and thus be transferred to secret distilled student models.

Backdoor Attack · Knowledge Distillation

Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information

no code implementations • 19 Oct 2021 • Baolin Zheng, Peipei Jiang, Qian Wang, Qi Li, Chao Shen, Cong Wang, Yunjie Ge, Qingyang Teng, Shenyi Zhang

For commercial cloud speech APIs, we propose Occam, a decision-only black-box adversarial attack, where only final decisions are available to the adversary.

Adversarial Attack · Speaker Recognition

Optimal Operation of a Hydrogen-based Building Multi-Energy System Based on Deep Reinforcement Learning

no code implementations • 22 Sep 2021 • Liang Yu, Shuqi Qin, Zhanbo Xu, Xiaohong Guan, Chao Shen, Dong Yue

To overcome the challenge, we reformulate the problem as a Markov game and propose an energy management algorithm to solve it based on multi-agent discrete actor-critic with rules (MADACR).

energy management · Management +1

Teacher Model Fingerprinting Attacks Against Transfer Learning

2 code implementations • 23 Jun 2021 • Yufei Chen, Chao Shen, Cong Wang, Yang Zhang

To this end, we propose a teacher model fingerprinting attack to infer the origin of a student model, i.e., the teacher model it transfers from.

Transfer Learning

CARTL: Cooperative Adversarially-Robust Transfer Learning

1 code implementation • 12 Jun 2021 • Dian Chen, Hongxin Hu, Qian Wang, Yinli Li, Cong Wang, Chao Shen, Qi Li

In deep learning, a typical strategy for transfer learning is to freeze the early layers of a pre-trained model and fine-tune the rest of its layers on the target domain.

Adversarial Robustness · Transfer Learning
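
The freeze-then-fine-tune strategy described above is standard practice; a minimal PyTorch sketch of that baseline (a generic illustration, not CARTL's cooperative variant) looks like this:

```python
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone and freeze all of its layers.
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with a trainable one for the target domain
# (10 target classes assumed here), and optionally unfreeze the last block.
model.fc = nn.Linear(model.fc.in_features, 10)
for param in model.layer4.parameters():
    param.requires_grad = True
```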

Infer-AVAE: An Attribute Inference Model Based on Adversarial Variational Autoencoder

no code implementations • 30 Dec 2020 • Yadong Zhou, Zhihao Ding, Xiaoming Liu, Chao Shen, Lingling Tong, Xiaohong Guan

However, using the trending graph neural networks (GNNs) as the encoder has the problem that GNNs aggregate redundant information from the neighborhood and generate indistinguishable user representations, an issue known as over-smoothing.

Attribute

Unify Local and Global Information for Top-N Recommendation

1 code implementation • 3 Dec 2020 • Xiaoming Liu, Shaocong Wu, Zhaohan Zhang, Chao Shen

To tackle this research gap, we propose a novel duet representation learning framework that fuses local information (user-item interaction data) and global information (an external knowledge graph) for top-N recommendation, and which is composed of two separate sub-models.

Knowledge Graph Embedding · Recommendation Systems +1

Optimal Resource Allocation for Delay Minimization in NOMA-MEC Networks

no code implementations • 11 Sep 2020 • Fang Fang, Yanqing Xu, Zhiguo Ding, Chao Shen, Mugen Peng, George K. Karagiannidis

We adopt the partial offloading policy, in which each user can partition its computation task into an offloaded part and a locally computed part.

Edge-computing
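
Under the partial offloading policy mentioned above, a generic delay model (our simplification, not necessarily the paper's exact formulation) splits a task of L bits into a fraction \lambda offloaded and 1-\lambda computed locally, with the two branches running in parallel:

```latex
D(\lambda) = \max\left( \frac{(1-\lambda)\,L\,c}{f_{\mathrm{loc}}},\;
\frac{\lambda L}{R} + \frac{\lambda\,L\,c}{f_{\mathrm{mec}}} \right)
```

Here c is the CPU cycles per bit, f_loc and f_mec are the local and edge-server CPU frequencies, and R is the NOMA uplink rate; delay minimization then chooses \lambda (and the resource allocation behind R) to balance the two branches.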

Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?

1 code implementation • 26 Jun 2020 • Kaidi Jin, Tianwei Zhang, Chao Shen, Yufei Chen, Ming Fan, Chenhao Lin, Ting Liu

It is unknown whether there are any connections and common characteristics between the defenses against these two attacks.

Adversarial Defense · Backdoor Attack

Multi-Agent Deep Reinforcement Learning for HVAC Control in Commercial Buildings

no code implementations • 25 Jun 2020 • Liang Yu, Yi Sun, Zhanbo Xu, Chao Shen, Dong Yue, Tao Jiang, Xiaohong Guan

In this paper, we intend to minimize the energy cost of an HVAC system in a multi-zone commercial building under dynamic pricing with the consideration of random zone occupancy, thermal comfort, and indoor air quality comfort.

reinforcement-learning · Reinforcement Learning (RL)

Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions

no code implementations • 22 Feb 2020 • Minghui Li, Sherman S. M. Chow, Shengshan Hu, Yuejing Yan, Chao Shen, Qian Wang

This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting, i.e., the server cannot learn the query, (intermediate) results, or the model.

Privacy Preserving

Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection

no code implementations • 29 Oct 2019 • Lingchen Zhao, Shengshan Hu, Qian Wang, Jianlin Jiang, Chao Shen, Xiangyang Luo, Pengfei Hu

Collaborative learning allows multiple clients to train a joint model without sharing their data with each other.

Adversarial Example Detection by Classification for Deep Speech Recognition

1 code implementation • 22 Oct 2019 • Saeid Samizade, Zheng-Hua Tan, Chao Shen, Xiaohong Guan

Machine learning systems are vulnerable to adversarial attacks and are highly likely to produce incorrect outputs under these attacks.

Classification · General Classification +3

Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms

no code implementations • USENIX Security Symposium 2019 • Qixue Xiao, Yufei Chen, Chao Shen, Yu Chen, Kang Li

We also present an algorithm that can successfully enable attacks against famous cloud-based image services (such as those from Microsoft Azure, Aliyun, Baidu, and Tencent) and cause obvious misclassification effects, even when the details of image processing (such as the exact scaling algorithm and scale dimension parameters) are hidden in the cloud.

Data Poisoning · Image Classification
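
The blind spot that such camouflage attacks exploit is that scaling algorithms consult only a sparse subset of source pixels. The toy numpy demonstration below shows the principle for simplified nearest-neighbor scaling (not the paper's attack algorithm, which handles real interpolation kernels and hidden cloud-side parameters):

```python
import numpy as np

def nn_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Simplified nearest-neighbor scaling: keep every factor-th pixel.
    return img[::factor, ::factor]

factor = 8
src = np.full((512, 512, 3), 255, np.uint8)   # looks like a plain white image
hidden = np.zeros((64, 64, 3), np.uint8)      # what the attacker wants after scaling
src[::factor, ::factor] = hidden              # overwrite under 2% of the pixels
assert (nn_downscale(src, factor) == hidden).all()  # scaled copy is attacker-controlled
```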

Can a composite heart rate variability biomarker shed new insights about autism spectrum disorder in school-aged children?

1 code implementation • 24 Aug 2018 • Martin G. Frasch, Chao Shen, Hau-Tieng Wu, Alexander Mueller, Emily Neuhaus, Raphael A. Bernier, Dana Kamara, Theodore P. Beauchaine

High-frequency heart rate variability (HRV) has identified parasympathetic nervous system alterations in autism spectrum disorder (ASD).

Quantitative Methods · Neurons and Cognition

WristAuthen: A Dynamic Time Wrapping Approach for User Authentication by Hand-Interaction through Wrist-Worn Devices

no code implementations • 22 Oct 2017 • Qi Lyu, Zhifeng Kong, Chao Shen, Tianwei Yue

This paper presents a novel user authentication system for wrist-worn devices that analyzes users' hand-interaction behavior and is both accurate and efficient.
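
Dynamic time warping, at the core of the approach above, aligns two traces that differ in speed. A compact reference implementation of plain DTW (generic algorithm, not WristAuthen's full pipeline):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# The same wrist gesture performed at different speeds still scores a small distance.
enrolled = np.sin(np.linspace(0, 3 * np.pi, 100))
probe = np.sin(np.linspace(0, 3 * np.pi, 130))
print(dtw_distance(enrolled, probe))  # small value -> accept; large -> reject
```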

Hopf insulators and their topologically protected surface states

no code implementations • 18 Nov 2013 • Dong-Ling Deng, Sheng-Tao Wang, Chao Shen, Lu-Ming Duan

Three-dimensional (3D) topological insulators in general need to be protected by certain kinds of symmetries other than the presumed U(1) charge conservation.
