Search Results for author: Cory Cornelius

Found 10 papers, 5 papers with code

Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE

no code implementations • 10 Oct 2023 • Marius Arvinte, Cory Cornelius, Jason Martin, Nageen Himayat

Beyond their impressive sampling capabilities, score-based diffusion models offer a powerful analysis tool in the form of unbiased density estimation of a query sample under the training data distribution.

Adversarial Robustness • Density Estimation
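Below is a minimal sketch of the density-estimation idea the abstract refers to: integrate the probability flow ODE from the data point toward the prior while accumulating the divergence of the drift (estimated with Hutchinson's trace trick) to obtain a log-likelihood. The score network `score_fn` and the VE-SDE noise schedule are illustrative assumptions, not the authors' implementation.

```python
import torch

def log_likelihood(x, score_fn, sigma_min=0.01, sigma_max=50.0, n_steps=500):
    """Euler integration of the probability flow ODE, accumulating the
    divergence term with a Hutchinson trace estimator."""
    x = x.detach().clone()
    delta_logp = torch.zeros(x.shape[0])
    ts = torch.linspace(1e-3, 1.0, n_steps)
    dt = ts[1] - ts[0]
    log_ratio = torch.log(torch.tensor(sigma_max / sigma_min))
    for t in ts:
        sigma = sigma_min * (sigma_max / sigma_min) ** t
        g2 = 2 * sigma**2 * log_ratio        # VE-SDE: g(t)^2 = d sigma^2 / dt
        x.requires_grad_(True)
        score = score_fn(x, t.expand(x.shape[0]))
        drift = -0.5 * g2 * score            # probability flow ODE drift
        eps = torch.randn_like(x)            # Hutchinson probe vector
        div = torch.autograd.grad((drift * eps).sum(), x)[0]
        div = (div * eps).flatten(1).sum(dim=1)
        x = (x + drift * dt).detach()
        delta_logp = delta_logp + div * dt
    # At t=1 the VE-SDE prior is approximately N(0, sigma_max^2 I)
    logp_prior = torch.distributions.Normal(0.0, sigma_max).log_prob(x)
    return logp_prior.flatten(1).sum(dim=1) + delta_logp
```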

Robust Principles: Architectural Design Principles for Adversarially Robust CNNs

1 code implementation • 30 Aug 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau

Our research aims to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs.

Adversarial Robustness

LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked

no code implementations • 14 Aug 2023 • Mansi Phute, Alec Helbling, Matthew Hull, Shengyun Peng, Sebastian Szyller, Cory Cornelius, Duen Horng Chau

We test LLM Self Defense on GPT-3.5 and Llama 2, two of the most prominent current LLMs, against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt engineering attacks.

Language Modelling • Large Language Model • +2
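A minimal sketch of the self-examination idea: pass a model's response back through an LLM and ask whether it is harmful, withholding flagged outputs. `generate` is a hypothetical text-completion callable standing in for any LLM API, and the filter prompt wording is an assumption, not the paper's.

```python
HARM_FILTER_PROMPT = (
    "Does the following text contain harmful content? "
    "Answer 'Yes, this is harmful' or 'No, this is not harmful'.\n\n"
    "Text: {response}"
)

def is_harmful(response: str, generate) -> bool:
    # Self-examination step: the LLM classifies the candidate response.
    verdict = generate(HARM_FILTER_PROMPT.format(response=response))
    return verdict.strip().lower().startswith("yes")

def guarded_chat(prompt: str, generate) -> str:
    response = generate(prompt)
    if is_harmful(response, generate):
        return "Response withheld: flagged as potentially harmful."
    return response
```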

RobArch: Designing Robust Architectures against Adversarial Attacks

1 code implementation • 8 Jan 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin

Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
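For context, a minimal sketch of the standard PGD adversarial training loop the abstract builds on (inner maximization over perturbations, outer minimization over weights); `model`, `loader`, and the hyperparameters are placeholders, not the RobArch recipe.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD: ascend the loss, projecting back into the eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        # Keep the adversarial image in the valid [0, 1] range.
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
    return (x + delta).detach()

def adversarial_training_epoch(model, loader, optimizer):
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)           # inner maximization
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```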

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models

no code implementations • 22 Aug 2022 • Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang

Finally, we find that data augmentation degrades the performance of existing attacks to a larger extent, and we propose an adaptive attack using augmentation to train shadow and attack models that improve attack performance.

Data Augmentation
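A minimal sketch of the augmentation-based attack idea described above: query the model on several augmented views of each sample and use the per-view losses as features for a binary membership classifier trained on shadow models. `augment` and the attack classifier are illustrative placeholders, not the Membership-Doctor pipeline.

```python
import torch
import torch.nn.functional as F

def membership_features(model, x, y, augment, n_views=8):
    """Per-sample feature vector: the loss on each augmented view."""
    losses = []
    with torch.no_grad():
        for _ in range(n_views):
            logits = model(augment(x))
            losses.append(F.cross_entropy(logits, y, reduction="none"))
    return torch.stack(losses, dim=1)  # shape (batch, n_views)

# The attack model is any binary classifier fit on features from shadow
# models, where membership labels are known by construction, e.g.:
#   feats_in  = membership_features(shadow, x_member, y_member, augment)
#   feats_out = membership_features(shadow, x_nonmember, y_nonmember, augment)
#   attack_clf.fit(torch.cat([feats_in, feats_out]), labels)
```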

Synthetic Dataset Generation for Adversarial Machine Learning Research

1 code implementation • 21 Jul 2022 • Xiruo Liu, Shibani Singh, Cory Cornelius, Colin Busho, Mike Tan, Anindya Paul, Jason Martin

Existing adversarial example research focuses on perturbations digitally inserted on top of natural image datasets.

BIG-bench Machine Learning

Toward Few-step Adversarial Training from a Frequency Perspective

no code implementations • 13 Oct 2020 • Hans Shih-Han Wang, Cory Cornelius, Brandon Edwards, Jason Martin

We investigate adversarial-sample generation methods from a frequency domain perspective and extend standard $l_{\infty}$ Projected Gradient Descent (PGD) to the frequency domain.
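One plausible reading of extending PGD to the frequency domain is to optimize the perturbation over spectral coefficients rather than pixels; the sketch below does this with an FFT and is an assumption-laden illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F

def frequency_pgd(model, x, y, eps=0.05, alpha=0.01, steps=10):
    X = torch.fft.fft2(x)                    # spectrum of the clean image
    delta = torch.zeros_like(X.real).requires_grad_(True)
    for _ in range(steps):
        # Perturb the spectrum, invert back to pixels, then ascend the loss.
        x_adv = torch.fft.ifft2(X + delta).real.clamp(0, 1)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = ((delta + alpha * grad.sign()).clamp(-eps, eps)
                 .detach().requires_grad_(True))
    return torch.fft.ifft2(X + delta).real.clamp(0, 1).detach()
```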

Talk Proposal: Towards the Realistic Evaluation of Evasion Attacks using CARLA

3 code implementations • 18 Apr 2019 • Cory Cornelius, Shang-Tse Chen, Jason Martin, Duen Horng Chau

In this talk we describe our content-preserving attack on object detectors, ShapeShifter, and demonstrate how to evaluate this threat in realistic scenarios.

The Efficacy of SHIELD under Different Threat Models

no code implementations • 1 Feb 2019 • Cory Cornelius, Nilaksh Das, Shang-Tse Chen, Li Chen, Michael E. Kounavis, Duen Horng Chau

To evaluate the robustness of the defense against an adaptive attacker, we consider the targeted attack success rate of the Projected Gradient Descent (PGD) attack, a strong gradient-based attack from the adversarial machine learning literature.

Adversarial Attack • Image Classification
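A minimal sketch of that evaluation metric: run targeted PGD (descending the loss toward an attacker-chosen label) and report the fraction of inputs classified as the target. `defended_model` and `target_class` are placeholders, and the step sizes are conventional defaults rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, y_target, eps=8/255, alpha=2/255, steps=40):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y_target)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)  # minimize
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
    return (x + delta).detach()

def targeted_success_rate(defended_model, loader, target_class):
    hits, total = 0, 0
    for x, _ in loader:
        y_t = torch.full((x.shape[0],), target_class, dtype=torch.long)
        preds = defended_model(targeted_pgd(defended_model, x, y_t)).argmax(1)
        hits += (preds == y_t).sum().item()
        total += x.shape[0]
    return hits / total
```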

ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

3 code implementations • 16 Apr 2018 • Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau

Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work.

Adversarial Attack • Autonomous Vehicles • +5
