Search Results for author: Takami Sato

Found 16 papers, 2 papers with code

Interior-Point Vanishing Problem in Semidefinite Relaxations for Neural Network Verification

no code implementations · 12 Jun 2025 · Ryota Ueda, Takami Sato, Ken Kobayashi, Kazuhide Nakata

Semidefinite programming (SDP) relaxation has emerged as a promising approach for neural network verification, offering tighter bounds than other convex relaxation methods for deep neural networks (DNNs) with ReLU activations.
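For background only (this is not the paper's specific formulation), the sketch below shows a standard SDP relaxation bound for a single ReLU layer over a box-shaped input set. It assumes cvxpy with the SCS solver, and the function name sdp_relu_upper_bound is purely illustrative.

```python
# Minimal, generic sketch of an SDP relaxation bound for one ReLU layer
# (illustrative only; NOT the formulation studied in the paper above).
# Requires: cvxpy, numpy, and an SDP-capable solver such as SCS.
import cvxpy as cp
import numpy as np

def sdp_relu_upper_bound(W, b, c, l, u):
    """Upper bound on c^T ReLU(W x + b) over the box l <= x <= u."""
    m, n = W.shape
    d = 1 + n + m                       # lifted vector [1; x; z]
    M = cp.Variable((d, d), symmetric=True)
    ix = slice(1, 1 + n)                # x block indices
    iz = slice(1 + n, d)                # z block indices

    cons = [M >> 0, M[0, 0] == 1]
    # Box constraint x_i in [l_i, u_i]: (x_i - l_i)(u_i - x_i) >= 0
    for i in range(n):
        cons.append(M[1 + i, 1 + i] <= (l[i] + u[i]) * M[0, 1 + i] - l[i] * u[i])
    # ReLU z = max(Wx + b, 0), relaxed via its quadratic characterization
    for j in range(m):
        zj = 1 + n + j
        pre = W[j, :] @ M[0, ix] + b[j]           # (Wx + b)_j in lifted variables
        cons.append(M[0, zj] >= 0)                # z_j >= 0
        cons.append(M[0, zj] >= pre)              # z_j >= (Wx + b)_j
        # z_j * (z_j - (Wx + b)_j) = 0, written on the lifted matrix
        cons.append(M[zj, zj] == W[j, :] @ M[ix, zj] + b[j] * M[0, zj])

    prob = cp.Problem(cp.Maximize(c @ M[0, iz]), cons)
    prob.solve(solver=cp.SCS)
    return prob.value

# Tiny example: 2-D input box, 3 hidden units
rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(3, 2)), rng.normal(size=3), rng.normal(size=3)
print(sdp_relu_upper_bound(W, b, c, l=-np.ones(2), u=np.ones(2)))
```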

Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective

no code implementations · 15 Sep 2024 · Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen

We design new attack success metrics that can mathematically model the impact of such designs on system-level attack success against TSR systems, and use them to revisit existing attacks.

Adversarial Attack · Memorization +1

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems

no code implementations · 27 May 2024 · Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, YiXuan Wang, Qi Alfred Chen, Qi Zhu

Specifically, we propose three distinct attack mechanisms: word injection, scenario manipulation, and knowledge injection, targeting various components in the LLM-based decision-making pipeline.

Autonomous Driving · Common Sense Reasoning +3

Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception

no code implementations · 7 Jan 2024 · Takami Sato, Sri Hrushikesh Varma Bhupathiraju, Michael Clifford, Takeshi Sugawara, Qi Alfred Chen, Sara Rampazzi

We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras.

Traffic Sign Recognition

Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models

no code implementations · CVPR 2024 · Takami Sato, Justin Yue, Nanze Chen, Ningfei Wang, Qi Alfred Chen

The NDD (Natural Denoising Diffusion) attack shows a notably strong capability to generate low-cost, model-agnostic, and transferable adversarial attacks by exploiting the natural attack capability in diffusion models.

Denoising · Image Generation

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

no code implementations · ICCV 2023 · Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen

In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, especially for the STOP sign-evasion attacks due to their popularity and severity.

Autonomous Driving

LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies

no code implementations · 19 Mar 2023 · Takami Sato, Yuki Hayakawa, Ryo Suzuki, Yohsuke Shiiki, Kentaro Yoshioka, Qi Alfred Chen

To fill these critical research gaps, we conduct the first large-scale measurement study on LiDAR spoofing attack capabilities on object detectors with 9 popular LiDARs, covering both first- and new-generation LiDARs, and 3 major types of object detectors trained on 5 different datasets.

Autonomous Driving · Object +2

Learning Representation for Anomaly Detection of Vehicle Trajectories

no code implementations · 9 Mar 2023 · Ruochen Jiao, Juyang Bai, Xiangguo Liu, Takami Sato, Xiaowei Yuan, Qi Alfred Chen, Qi Zhu

We conduct extensive experiments to demonstrate that our supervised method, based on contrastive learning, and our unsupervised method, based on reconstruction with a semantic latent space, significantly improve anomalous-trajectory detection over various baseline methods in their respective settings.

Anomaly Detection · Autonomous Driving +3

Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction

no code implementations · 27 May 2022 · Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu

In addition, experiments show that our method can significantly improve the system's robust generalization to unseen patterns of attacks.

Adversarial Robustness · Decision Making +2

Towards Driving-Oriented Metric for Lane Detection Models

1 code implementation · CVPR 2022 · Takami Sato, Qi Alfred Chen

After the 2017 TuSimple Lane Detection Challenge, its dataset and its accuracy- and F1-based evaluation have become the de facto standard for measuring the performance of lane detection methods (a simplified sketch of this metric follows this entry).

Autonomous Driving · Lane Detection
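For context, here is a minimal sketch of the TuSimple-style point accuracy and F1 computation the snippet refers to. It is a simplified illustration rather than the paper's evaluation code: lanes are assumed to be given as x-coordinates sampled at shared row positions (with -2 marking missing points, as in the TuSimple format), and the 20-pixel point tolerance and 0.85 per-lane accuracy threshold are the commonly used benchmark values.

```python
# Simplified TuSimple-style lane detection metric (illustrative sketch).
import numpy as np

PT_THRESH = 20      # max pixel distance for a point to count as correct
LANE_THRESH = 0.85  # per-lane accuracy required to count the lane as a TP

def lane_accuracy(pred_xs, gt_xs):
    """Fraction of ground-truth points matched within PT_THRESH pixels."""
    pred_xs, gt_xs = np.asarray(pred_xs, float), np.asarray(gt_xs, float)
    valid = gt_xs >= 0                      # -2 marks "no lane at this row"
    if valid.sum() == 0:
        return 0.0
    hits = np.abs(pred_xs[valid] - gt_xs[valid]) < PT_THRESH
    return hits.mean()

def f1_score(pred_lanes, gt_lanes):
    """Greedily match predicted lanes to ground-truth lanes, then compute F1."""
    tp = 0
    unmatched_gt = list(range(len(gt_lanes)))
    for pred in pred_lanes:
        accs = [lane_accuracy(pred, gt_lanes[g]) for g in unmatched_gt]
        if accs and max(accs) >= LANE_THRESH:
            unmatched_gt.pop(int(np.argmax(accs)))
            tp += 1
    fp = len(pred_lanes) - tp
    fn = len(gt_lanes) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Example: one predicted lane vs. one ground-truth lane at 4 row positions
print(f1_score(pred_lanes=[[100, 120, 140, 160]], gt_lanes=[[105, 118, 150, -2]]))
```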

On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving

no code implementations · 6 Jul 2021 · Takami Sato, Qi Alfred Chen

We demonstrate that the conventional evaluation fails to reflect the robustness in end-to-end autonomous driving scenarios.

Autonomous Driving · Lane Detection

End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering

no code implementations · 27 Feb 2021 · Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen, Qi Zhu

The experiment results demonstrate that our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.

Autonomous Driving

Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

no code implementations · 14 Sep 2020 · Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

Automated Lane Centering (ALC) systems are convenient and widely deployed today, but are also highly security- and safety-critical.

Lane Detection

Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack

no code implementations · 3 Mar 2020 · Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi Alfred Chen

The Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but is also extremely security- and safety-critical.

Adversarial Attack
