Search Results for author: Chaowei Xiao

Found 23 papers, 9 papers with code

AugMax: Adversarial Composition of Random Augmentations for Robust Training

1 code implementation • 26 Oct 2021 • Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang

Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness.
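The adversarial composition named in the title can be sketched with a toy random-search variant, assuming only a generic `loss` callable; the real AugMax composes standard image augmentations and optimises the mixing weights by gradient ascent rather than random search:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, seed):
    """A 'random augmentation': here just a random sign-noise pattern
    (a stand-in for real image augmentations)."""
    r = np.random.default_rng(seed)
    return x + 0.1 * r.choice([-1.0, 1.0], size=x.shape)

def augmax_like(x, loss, k=4, trials=16):
    """Adversarial composition by random search: keep the convex
    combination of k random augmentations that maximises the loss."""
    views = [augment(x, s) for s in range(k)]
    best, best_loss = x, loss(x)
    for _ in range(trials):
        w = rng.dirichlet(np.ones(k))          # random mixing weights
        mix = sum(wi * v for wi, v in zip(w, views))
        if loss(mix) > best_loss:
            best, best_loss = mix, loss(mix)
    return best

x = np.zeros(8)
loss = lambda z: float(np.abs(z).sum())        # toy 'model loss'
x_hard = augmax_like(x, loss)
```

The diversity comes from sampling several augmentations; the hardness comes from adversarially choosing how to combine them.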

Auditing AI models for Verified Deployment under Semantic Specifications

no code implementations • 25 Sep 2021 • Homanga Bharadhwaj, De-An Huang, Chaowei Xiao, Anima Anandkumar, Animesh Garg

Auditing trained deep learning (DL) models prior to deployment is vital in preventing unintended consequences.

Face Recognition

Long-Short Transformer: Efficient Transformers for Language and Vision

3 code implementations • 5 Jul 2021 • Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro

For instance, Transformer-LS achieves 0.97 test BPC on enwik8 using half the number of parameters of previous methods, while being faster and able to handle sequences 3x as long as its full-attention version on the same hardware.

Language Modelling

Practical Machine Learning Safety: A Survey and Primer

no code implementations • 9 Jun 2021 • Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, Jay Yadawa

The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles must address a variety of ML weaknesses, including limited interpretability, verifiability, and performance.

Autonomous Vehicles · Domain Adaptation

Can Shape Structure Features Improve Model Robustness Under Diverse Adversarial Settings?

no code implementations • ICCV 2021 • MingJie Sun, Zichao Li, Chaowei Xiao, Haonan Qiu, Bhavya Kailkhura, Mingyan Liu, Bo Li

Specifically, EdgeNetRob and EdgeGANRob first explicitly extract shape structure features from a given image via an edge detection algorithm.

Edge Detection
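The edge-based shape features the abstract describes can be illustrated with a plain Sobel edge extractor; this is a hypothetical stand-in, since the paper's EdgeNetRob and EdgeGANRob pipelines use their own edge-detection front end:

```python
import numpy as np

def sobel_edges(img):
    """Extract an edge-magnitude map: a simple example of the
    shape-structure features the abstract refers to."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)                 # gradient magnitude

img = np.zeros((8, 8))
img[:, 4:] = 1.0                            # vertical step edge
edges = sobel_edges(img)
```

The intuition is that downstream models trained on `edges` rather than raw pixels depend on shape structure, which small pixel-level perturbations disturb less.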

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

3 code implementations • NeurIPS 2020 • Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL in this setting have had limited success and lack theoretical principles.

AdvIT: Adversarial Frames Identifier Based on Temporal Consistency in Videos

no code implementations • ICCV 2019 • Chaowei Xiao, Ruizhi Deng, Bo Li, Taesung Lee, Benjamin Edwards, Jinfeng Yi, Dawn Song, Mingyan Liu, Ian Molloy

In particular, we apply optical flow estimation to the target and previous frames to generate pseudo frames, and evaluate the consistency of the learner's output between these pseudo frames and the target frame.

Action Recognition · Autonomous Driving · +6
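The pseudo-frame consistency check can be sketched in a few lines, assuming the optical flow field has already been estimated; the `model` here is a toy stand-in for the learner's output:

```python
import numpy as np

def warp(frame, flow):
    """Warp a frame by a dense flow field (nearest-neighbour sampling)
    to build a pseudo frame for the consistency check."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def consistency_score(model, pseudo, target):
    """Small output distance between pseudo and target frames suggests
    a benign frame; adversarial frames tend to break this consistency."""
    return float(np.abs(model(pseudo) - model(target)).mean())

# toy example: a bright square translated by one pixel to the right
prev = np.zeros((8, 8))
prev[2:4, 2:4] = 1.0
target = np.roll(prev, 1, axis=1)
flow = np.zeros((8, 8, 2))
flow[..., 0] = 1.0                    # known horizontal motion
pseudo = warp(prev, flow)
model = lambda f: f.mean()            # hypothetical learner output
score = consistency_score(model, pseudo, target)
```

A benign target frame yields a near-zero score; a perturbed frame would disagree with the pseudo frames generated from its clean neighbours.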

Characterizing Attacks on Deep Reinforcement Learning

no code implementations • 21 Jul 2019 • Chaowei Xiao, Xinlei Pan, Warren He, Jian Peng, Ming-Jie Sun, Jin-Feng Yi, Mingyan Liu, Bo Li, Dawn Song

In addition to current observation based attacks against DRL, we propose the first targeted attacks based on action space and environment dynamics.

Autonomous Driving

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations • 16 Jul 2019 • Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, an important yet unexplored problem.

Autonomous Driving · Object Detection

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

no code implementations • 11 Jul 2019 • Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li

Deep neural networks (DNNs) are found to be vulnerable against adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aiming to induce arbitrarily incorrect predictions.

Autonomous Driving

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

1 code implementation • 19 Jun 2019 • Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li

In this paper, we aim to explore the impact of semantic manipulation on DNN predictions by manipulating the semantic attributes of images to generate "unrestricted adversarial examples".

Face Recognition · Face Verification

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations • ICLR 2020 • Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.
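The fast forward IBP bounds mentioned here can be sketched for a single affine + ReLU layer; CROWN's backward linear-relaxation pass, the other half of CROWN-IBP, is omitted:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise input bounds [l, u] through x -> Wx + b
    using the center/radius form of interval bound propagation."""
    c, r = (l + u) / 2.0, (u - l) / 2.0
    center = W @ c + b
    radius = np.abs(W) @ r          # worst-case deviation per output
    return center - radius, center + radius

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# one-layer example with an L-infinity input ball of radius 0.1
W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([0.0, -1.0])
l0, u0 = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
l1, u1 = ibp_relu(*ibp_affine(l0, u0, W, b))
```

Certified training then minimises a loss on the worst-case logits implied by these bounds, with CROWN's tighter backward bounds blended in during the early phase of training.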

Application-driven Privacy-preserving Data Publishing with Correlated Attributes

no code implementations • 26 Dec 2018 • Aria Rezaei, Chaowei Xiao, Jie Gao, Bo Li, Sirajum Munir

To address the privacy concerns of users in this environment, we propose a novel framework called PR-GAN that offers privacy-preserving mechanism using generative adversarial networks.

Data Poisoning Attack against Unsupervised Node Embedding Methods

no code implementations • 30 Oct 2018 • Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, Dawn Song

In this paper, we take the task of link prediction, one of the most fundamental problems in graph analysis, as an example, and introduce a data poisoning attack against node embedding methods.

Data Poisoning · Link Prediction

MeshAdv: Adversarial Meshes for Visual Recognition

no code implementations • CVPR 2019 • Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, Mingyan Liu

Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications.

Robust Physical-World Attacks on Deep Learning Visual Classification

no code implementations • CVPR 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.

Classification · General Classification
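The small-magnitude perturbations described above can be illustrated with a one-step gradient-sign attack on a toy logistic-regression classifier. This is a generic sketch of the vulnerability, not the paper's RP2 algorithm, which optimises perturbations to survive varying physical conditions:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step L-infinity attack: nudge each input feature in the
    direction that increases the cross-entropy loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    grad_x = (p - y) * w              # d(cross-entropy)/dx
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.8, 0.2])              # correctly classified as y = 1
y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)     # each feature moved by at most 0.1
```

The perturbation is bounded by `eps` per feature, yet it reliably pushes the logit toward the wrong class.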

Performing Co-Membership Attacks Against Deep Generative Models

no code implementations • 24 May 2018 • Kin Sum Liu, Chaowei Xiao, Bo Li, Jie Gao

We conduct extensive experiments on a variety of datasets and generative models, showing that: our attacker network outperforms prior membership attacks; co-membership attacks can be substantially more powerful than single attacks; and VAEs are more susceptible to membership attacks than GANs.
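The gap between single and co-membership attacks can be illustrated with a deliberately simple reconstruction-error threshold, a toy proxy for the paper's learned attacker network:

```python
import numpy as np

def single_membership_attack(candidate, reconstruct, threshold):
    """Guess 'member' if the generative model reconstructs the
    candidate with error below a threshold."""
    err = float(((reconstruct(candidate) - candidate) ** 2).mean())
    return err < threshold

def co_membership_attack(candidates, reconstruct, threshold):
    """Co-membership: decide jointly for a batch known to be either
    all-in or all-out, pooling the evidence across candidates."""
    errs = [float(((reconstruct(c) - c) ** 2).mean()) for c in candidates]
    return float(np.mean(errs)) < threshold

# toy 'generative model' that only memorised points near the origin
reconstruct = lambda x: x * (np.linalg.norm(x) < 1.0)
member = np.full(4, 0.1)       # reconstructed exactly -> zero error
non_member = np.full(4, 2.0)   # mapped to zero -> large error
```

Pooling across a batch averages out per-sample noise in the error signal, which is why co-membership attacks can be stronger than deciding one candidate at a time.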

Spatially Transformed Adversarial Examples

3 code implementations • ICLR 2018 • Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song

Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
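The core observation, that a tiny spatial transform can produce a large pixel-space $\mathcal{L}_p$ distance while remaining perceptually benign, is easy to demonstrate:

```python
import numpy as np

# A one-pixel spatial shift of a high-contrast pattern: the flow field
# has unit magnitude everywhere, yet every pixel value changes, so the
# pixel-space L2 distance is maximal for this image.
img = np.indices((8, 8)).sum(0) % 2.0   # checkerboard
shifted = np.roll(img, 1, axis=1)       # spatial transform: unit flow
l2 = np.sqrt(((img - shifted) ** 2).sum())
```

To a human the shifted checkerboard is indistinguishable from the original, yet per-pixel distance metrics (and defenses calibrated to them) treat it as a maximal change; stAdv optimises such flow fields instead of additive noise.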

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features

no code implementations • 28 Aug 2017 • Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik

A conventional approach to evaluate ML robustness to such attacks, as well as to design robust ML, is by considering simplified feature-space models of attacks, where the attacker changes ML features directly to effect evasion, while minimizing or constraining the magnitude of this change.

Intrusion Detection · Malware Detection

Robust Physical-World Attacks on Deep Learning Models

1 code implementation • 27 Jul 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
