Search Results for author: Yulong Cao

Found 15 papers, 0 papers with code

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

no code implementations11 Jul 2019 Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples, which are carefully crafted inputs with a small perturbation magnitude aimed at inducing arbitrarily incorrect predictions.

Autonomous Driving
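
As an aside on the general concept in the snippet above: a small, bounded perturbation crafted from the model's gradient is often enough to flip a classifier's prediction. The PyTorch sketch below shows a one-step FGSM-style perturbation; it illustrates the generic idea only (the paper itself attacks LiDAR point clouds, not images), and the model, inputs, and epsilon are placeholders.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One-step FGSM: add a small L-infinity-bounded perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the gradient's sign, scaled by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage with a hypothetical classifier and batch:
# x_adv = fgsm_perturb(classifier, images, labels, epsilon=8 / 255)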

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations16 Jul 2019 Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored.

Autonomous Driving BIG-bench Machine Learning +2

Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures

no code implementations30 Jun 2020 Jiachen Sun, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

In this work, we perform the first study to explore the general vulnerability of current LiDAR-based perception architectures and discover that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks.

Autonomous Driving Self-Driving Cars
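
To make the occlusion intuition concrete: a genuine obstacle blocks the laser, so few returns should appear behind it within its azimuth sector, whereas spoofed points leave the background visible. The NumPy sketch below scores this consistency; the sector parameterization and threshold are illustrative assumptions, not the countermeasure proposed in the paper.

import numpy as np

def occlusion_score(points_xy, az_min, az_max, r_far):
    """Fraction of returns inside a detection's azimuth sector that lie beyond it.
    A high fraction means the 'object' does not occlude the scene and may be spoofed."""
    az = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    rng = np.linalg.norm(points_xy, axis=1)
    in_sector = (az >= az_min) & (az <= az_max)
    if not np.any(in_sector):
        return 0.0
    return float(np.mean(rng[in_sector] > r_far))

# Example with random scene points and a hypothetical detection sector:
pts = np.random.uniform(-50, 50, size=(10000, 2))
print(occlusion_score(pts, az_min=0.10, az_max=0.20, r_far=15.0))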

On The Adversarial Robustness of 3D Point Cloud Classification

no code implementations28 Sep 2020 Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Zhuoqing Mao

Since adversarial training (AT) is believed to be the most effective defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the model's robustness under AT.

3D Point Cloud Classification Adversarial Robustness +3

On Adversarial Robustness of 3D Point Cloud Classification under Adaptive Attacks

no code implementations24 Nov 2020 Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao

Since adversarial training (AT) is believed to be the most robust defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the 3D model's robustness under AT.

3D Point Cloud Classification Adversarial Robustness +3
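
The finding in the two entries above centers on the symmetric (permutation-invariant) aggregation in point-cloud networks. The minimal PyTorch sketch below exposes that pooling as a swappable choice in a PointNet-style classifier; the architecture and layer sizes are illustrative, not the models evaluated in the paper.

import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier; `pool` selects the symmetric aggregation."""
    def __init__(self, num_classes=40, pool="max"):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(),
                                 nn.Conv1d(64, 128, 1), nn.ReLU())
        self.pool = pool
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):            # x: (batch, 3, num_points)
        feat = self.mlp(x)           # per-point features: (batch, 128, num_points)
        g = feat.max(dim=2).values if self.pool == "max" else feat.mean(dim=2)
        return self.head(g)

# logits = TinyPointNet(pool="max")(torch.randn(8, 3, 1024))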

Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions

no code implementations NeurIPS 2021 Jiachen Sun, Yulong Cao, Christopher B. Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Morley Mao, Chaowei Xiao

In this paper, we systematically study the impact of various self-supervised learning proxy tasks on different architectures and threat models for 3D point clouds with adversarial training.

Adversarial Robustness Autonomous Driving +1
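
As a rough illustration of combining adversarial training with a self-supervised proxy task, the sketch below adds a rotation-prediction loss on top of the supervised loss over adversarial inputs. The proxy task, the extra head, and the weight alpha are assumptions for illustration; the paper studies several proxy tasks and architectures.

import torch.nn.functional as F

def joint_at_ssl_loss(logits_adv, labels, rot_logits, rot_labels, alpha=0.5):
    """Supervised loss on adversarial point clouds plus a self-supervised proxy loss."""
    cls = F.cross_entropy(logits_adv, labels)       # adversarial-training objective
    ssl = F.cross_entropy(rot_logits, rot_labels)   # hypothetical rotation-prediction head
    return cls + alpha * ssl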

Robust Trajectory Prediction against Adversarial Attacks

no code implementations29 Jul 2022 Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone

We demonstrate that our method improves performance on adversarial data by 46% at the cost of only a 3% performance degradation on clean data, compared to the model trained with clean data.

Autonomous Driving Data Augmentation +1
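
The reported trade-off comes from training on adversarially perturbed trajectory data. The sketch below shows a generic PGD-style perturbation of the observed history under an L-infinity bound, which would then be mixed into training; the predictor interface, loss, and bound are assumptions, not the paper's exact attack or augmentation scheme.

import torch
import torch.nn.functional as F

def perturb_history(model, hist, future, eps=0.1, steps=5, lr=0.03):
    """PGD-style perturbation of the observed trajectory that increases prediction error."""
    delta = torch.zeros_like(hist, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(model(hist + delta), future)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the prediction loss
            delta.clamp_(-eps, eps)           # keep the perturbation small
            delta.grad.zero_()
    return (hist + delta).detach()

# Training then mixes clean and perturbed histories:
# loss = F.mse_loss(model(perturb_history(model, hist, future)), future)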

Language-Guided Traffic Simulation via Scene-Level Diffusion

no code implementations10 Jun 2023 Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, Baishakhi Ray

Realistic and controllable traffic simulation is a core capability that is necessary to accelerate autonomous vehicle (AV) development.

Language Modelling Large Language Model

Reinforcement Learning with Human Feedback for Realistic Traffic Simulation

no code implementations1 Sep 2023 Yulong Cao, Boris Ivanovic, Chaowei Xiao, Marco Pavone

This work addresses this gap by developing a framework that employs reinforcement learning with human feedback (RLHF) to enhance the realism of existing traffic models.

reinforcement-learning
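
A standard RLHF building block that fits the description above is a reward model trained on pairwise human preferences between simulated rollouts, later used to fine-tune the traffic model with RL. The Bradley-Terry style loss below is a generic sketch; the reward parameterization and RL objective used in the paper may differ.

import torch.nn.functional as F

def preference_loss(reward_model, rollout_preferred, rollout_other):
    """Encourage the reward model to score the human-preferred rollout higher."""
    r_pos = reward_model(rollout_preferred)   # scalar realism score per rollout
    r_neg = reward_model(rollout_other)
    return -F.logsigmoid(r_pos - r_neg).mean()

# The learned reward then drives RL fine-tuning of the traffic model,
# typically with a KL penalty toward the original model to preserve diversity.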

ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal Consistency

no code implementations23 Oct 2023 Minkyoung Cho, Yulong Cao, Zixiang Zhou, Z. Morley Mao

Deep neural networks (DNNs) are increasingly integrated into LiDAR (Light Detection and Ranging)-based perception systems for autonomous vehicles (AVs), requiring robust performance under adversarial conditions.

Anomaly Detection Autonomous Vehicles
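
To illustrate point-level temporal consistency in general terms: an object that suddenly appears without support in the previous (ego-motion-compensated) sweep is suspicious. The NumPy sketch below computes that support for one detected cluster; the neighbor radius and decision threshold are illustrative assumptions rather than ADoPT's actual pipeline.

import numpy as np

def temporal_support(cluster_now, points_prev, radius=0.3):
    """Fraction of a cluster's points with a nearby neighbor in the previous sweep."""
    # Brute-force nearest-neighbor distances; fine for a single small cluster.
    d = np.linalg.norm(cluster_now[:, None, :] - points_prev[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < radius))

# cluster = np.random.randn(60, 3); prev_sweep = np.random.randn(5000, 3) * 10
# flag_spoofed = temporal_support(cluster, prev_sweep) < 0.2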

Dolphins: Multimodal Language Model for Driving

no code implementations1 Dec 2023 Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, Chaowei Xiao

This work is motivated by the quest for fully autonomous vehicles (AVs) capable of navigating complex real-world scenarios with human-like understanding and responsiveness.

Autonomous Vehicles In-Context Learning +1

RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios

no code implementations19 Dec 2023 Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, Marco Pavone

Simulation plays a crucial role in the development of autonomous vehicles (AVs) due to the potential risks associated with real-world testing.

Autonomous Vehicles In-Context Learning +1

WIPI: A New Web Threat for LLM-Driven Web Agents

no code implementations26 Feb 2024 Fangzhou Wu, Shutong Wu, Yulong Cao, Chaowei Xiao

To evaluate the effectiveness of the proposed methodology, we conducted extensive experiments using 7 plugin-based ChatGPT Web Agents, 8 Web GPTs, and 3 different open-source Web Agents.
