Search Results for author: Chawin Sitawarin

Found 22 papers, 16 papers with code

Vulnerability Detection with Code Language Models: How Far Are We?

1 code implementation • 27 Mar 2024 • Yangruibo Ding, Yanjun Fu, Omniyyah Ibrahim, Chawin Sitawarin, Xinyun Chen, Basel Alomair, David Wagner, Baishakhi Ray, Yizheng Chen

Evaluating code LMs on PrimeVul reveals that existing benchmarks significantly overestimate the performance of these models.

Vulnerability Detection

PAL: Proxy-Guided Black-Box Attack on Large Language Models

1 code implementation • 15 Feb 2024 • Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo

In this work, we introduce the Proxy-Guided Attack on LLMs (PAL), the first optimization-based attack on LLMs in a black-box query-only setting.

Jatmo: Prompt Injection Defense by Task-Specific Finetuning

1 code implementation • 29 Dec 2023 • Julien Piet, Maha Alrashed, Chawin Sitawarin, Sizhe Chen, Zeming Wei, Elizabeth Sun, Basel Alomair, David Wagner

Jatmo only needs a task prompt and a dataset of inputs for the task: it uses the teacher model to generate outputs.

Instruction Following
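
The snippet above describes a distillation-style data-generation loop. As a rough sketch (with a hypothetical `teacher` callable, not the paper's actual code), Jatmo-style dataset construction might look like:

```python
def build_finetuning_set(task_prompt, inputs, teacher):
    """Jatmo-style dataset generation (illustrative sketch, hypothetical API).

    The teacher model answers each input under the fixed task prompt; the
    resulting (input, output) pairs are then used to finetune a
    task-specific model that is served without any instruction prompt,
    which removes the prompt-injection surface.
    """
    dataset = []
    for inp in inputs:
        output = teacher(f"{task_prompt}\n\nInput: {inp}")
        dataset.append({"input": inp, "output": output})
    return dataset
```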

Mark My Words: Analyzing and Evaluating Language Model Watermarks

1 code implementation • 1 Dec 2023 • Julien Piet, Chawin Sitawarin, Vivian Fang, Norman Mu, David Wagner

The capabilities of large language models have grown significantly in recent years and so too have concerns about their misuse.

Language Modelling

PubDef: Defending Against Transfer Attacks From Public Models

1 code implementation • 26 Oct 2023 • Chawin Sitawarin, Jaewon Chang, David Huang, Wesson Altoyan, David Wagner

We evaluate the transfer attacks in this setting and propose a specialized defense method based on a game-theoretic perspective.

SPDER: Semiperiodic Damping-Enabled Object Representation

no code implementations • 27 Jun 2023 • Kathan Shah, Chawin Sitawarin

We present a neural network architecture designed to naturally learn a positional embedding and overcome the spectral bias towards lower frequencies faced by conventional implicit neural representation networks.

Image Super-Resolution · Object +1
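
The spectral bias mentioned above is commonly countered by mapping coordinates through sinusoidal positional embeddings. The sketch below shows the standard Fourier-feature trick as a generic illustration; it is not SPDER's semiperiodic damping scheme:

```python
import math

def fourier_features(x, num_freqs=4):
    """Map a scalar coordinate x to [sin(2^k * pi * x), cos(2^k * pi * x)]
    pairs for k = 0..num_freqs-1.

    Feeding such features to an MLP lets it fit high-frequency signal
    content that a raw-coordinate network is biased against learning
    (a generic positional embedding, not the paper's architecture).
    """
    feats = []
    for k in range(num_freqs):
        feats.append(math.sin((2 ** k) * math.pi * x))
        feats.append(math.cos((2 ** k) * math.pi * x))
    return feats
```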

REAP: A Large-Scale Realistic Adversarial Patch Benchmark

1 code implementation • ICCV 2023 • Nabeel Hingun, Chawin Sitawarin, Jerry Li, David Wagner

In this work, we propose the REAP (REalistic Adversarial Patch) benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images and under real-world conditions.

Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems

1 code implementation • 7 Oct 2022 • Chawin Sitawarin, Florian Tramèr, Nicholas Carlini

Decision-based attacks construct adversarial examples against a machine learning (ML) model by making only hard-label queries.
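The hard-label setting can be illustrated with a minimal query loop (a hypothetical sketch, not the paper's method): the attacker only observes the predicted class, finds any misclassified starting point, then binary-searches toward the original input to shrink the perturbation.

```python
import random

def hard_label_attack(predict, x, true_label, queries=500):
    """Toy decision-based attack using only hard-label queries (sketch).

    1. Find any misclassified starting point by random search.
    2. Binary-search along the segment back toward x, keeping the
       perturbation as small as possible while staying misclassified.
    """
    adv = None
    for _ in range(queries):
        cand = [xi + random.uniform(-10.0, 10.0) for xi in x]
        if predict(cand) != true_label:  # hard-label query
            adv = cand
            break
    if adv is None:
        return None
    lo, hi = 0.0, 1.0  # fraction of the way from adv back toward x
    for _ in range(30):
        mid = (lo + hi) / 2
        cand = [a + mid * (xi - a) for a, xi in zip(adv, x)]
        if predict(cand) != true_label:
            lo = mid   # still adversarial: move closer to x
        else:
            hi = mid   # crossed the decision boundary: back off
    return [a + lo * (xi - a) for a, xi in zip(adv, x)]
```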

Part-Based Models Improve Adversarial Robustness

1 code implementation • 15 Sep 2022 • Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David Wagner

We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks by introducing a part-based model for object classification.

Adversarial Robustness

Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams

1 code implementation • NeurIPS 2021 • Chawin Sitawarin, Evgenios M. Kornaropoulos, Dawn Song, David Wagner

On a high level, the search radius expands to the nearby Voronoi cells until we find a cell that classifies differently from the input point.

Adversarial Robustness
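
For a plain 1-NN classifier the expanding-search idea can be sketched in a few lines (an illustration of the radius-expansion intuition, not the paper's higher-order Voronoi algorithm):

```python
import math

def one_nn_adversarial(x, train):
    """Toy 1-NN sketch: examine Voronoi cells in order of distance until
    one classifies differently, then step just inside that cell.

    `train` is a list of (point, label) pairs; distances are Euclidean.
    """
    def d(a, b):
        return math.dist(a, b)
    pts = sorted(train, key=lambda p: d(x, p[0]))
    own_label = pts[0][1]
    for p, label in pts:                 # cells in order of distance from x
        if label != own_label:
            # Move x toward p until p becomes its nearest neighbor.
            for t in (i / 100 for i in range(1, 101)):
                cand = [xi + t * (pi - xi) for xi, pi in zip(x, p)]
                if min(train, key=lambda q: d(cand, q[0]))[1] != own_label:
                    return cand
    return None
```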

SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing

no code implementations • 18 Mar 2020 • Chawin Sitawarin, Supriyo Chakraborty, David Wagner

SAT leads to a significant improvement in both clean accuracy and robustness compared to AT, TRADES, and other baselines.

Adversarial Robustness

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models

1 code implementation • 14 Mar 2020 • Chawin Sitawarin, David Wagner

We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks.

Defending Against Adversarial Examples with K-Nearest Neighbor

1 code implementation • 23 Jun 2019 • Chawin Sitawarin, David Wagner

The mean perturbation norm required to fool our models is 3.07 on MNIST and 2.30 on CIFAR-10.

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples

no code implementations • 5 May 2019 • Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal

A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to the test inputs leads to incorrect output classification.

Autonomous Driving · General Classification
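
Norm-bounded evasion of the kind described above is often illustrated with the fast gradient sign method. The sketch below specializes FGSM to a linear classifier (a generic textbook example, not this paper's attack):

```python
def fgsm_linear(w, b, x, y, eps):
    """FGSM-style L-inf evasion on a linear classifier sign(w·x + b).

    With labels y in {-1, +1}, the margin y*(w·x + b) decreases fastest
    under an L-inf budget when each coordinate x_i moves by eps in the
    direction -sign(y * w_i); the step is independent of the bias b.
    """
    def sgn(v):
        return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi - eps * sgn(y * wi) for xi, wi in zip(x, w)]
```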

On the Robustness of Deep K-Nearest Neighbors

2 code implementations • 20 Mar 2019 • Chawin Sitawarin, David Wagner

Despite a large amount of attention on adversarial examples, very few works have demonstrated an effective defense against this threat.

DARTS: Deceiving Autonomous Cars with Toxic Signs

1 code implementation • 18 Feb 2018 • Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, Prateek Mittal

In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS).

Traffic Sign Recognition

Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos

1 code implementation • 9 Jan 2018 • Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal, Mung Chiang

Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world.

Traffic Sign Recognition

Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep Learning in Self Driving Cars

no code implementations • 4 Dec 2017 • Mark Martinez, Chawin Sitawarin, Kevin Finch, Lennart Meincke, Alex Yablonski, Alain Kornhauser

As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V's virtual environment.

Autonomous Driving · Self-Driving Cars +1
