Search Results for author: Patrick McDaniel

Found 33 papers, 13 papers with code

Explorations in Texture Learning

1 code implementation14 Mar 2024 Blaine Hoak, Patrick McDaniel

In this work, we investigate texture learning: the identification of textures learned by object classification models, and the extent to which they rely on these textures.

Object

A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems

no code implementations28 Feb 2024 Fangzhou Wu, Ning Zhang, Somesh Jha, Patrick McDaniel, Chaowei Xiao

Large Language Model (LLM) systems are inherently compositional, with an individual LLM serving as the core foundation and additional layers of objects such as plugins and sandboxes built around it.

Language Modelling Large Language Model

Mitigating Fine-tuning Jailbreak Attack with Backdoor Enhanced Alignment

no code implementations22 Feb 2024 Jiongxiao Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Junjie Hu, Yixuan Li, Patrick McDaniel, Muhao Chen, Bo Li, Chaowei Xiao

Despite the general capabilities of Large Language Models (LLMs) like GPT-4 and Llama-2, these models still require fine-tuning or adaptation with customized data to meet specific business demands and the intricacies of tailored use cases.

The Efficacy of Transformer-based Adversarial Attacks in Security Domains

no code implementations17 Oct 2023 Kunyang Li, Kyle Domico, Jean-Charles Noirot Ferrand, Patrick McDaniel

The transferability of these adversarial examples is measured by evaluating each set on other models to determine which models offer more adversarial strength, and consequently, more robustness against these attacks.

Malware Detection Network Intrusion Detection

The Space of Adversarial Strategies

no code implementations9 Sep 2022 Ryan Sheatsley, Blaine Hoak, Eric Pauley, Patrick McDaniel

From our evaluation, we find attack performance to be highly contextual: the domain, model robustness, and threat model can have a profound influence on attack efficacy.

Adversarial Planning

no code implementations1 May 2022 Valentin Vie, Ryan Sheatsley, Sophia Beyda, Sushrut Shringarputale, Kevin Chan, Trent Jaeger, Patrick McDaniel

We evaluate the performance of the algorithms against two dominant planning algorithms used in commercial applications (D* Lite and Fast Downward) and show both are vulnerable to extremely limited adversarial action.

Autonomous Vehicles Management

HoneyModels: Machine Learning Honeypots

no code implementations21 Feb 2022 Ahmed Abdou, Ryan Sheatsley, Yohan Beugin, Tyler Shipp, Patrick McDaniel

To harden these systems, the ever-growing field of Adversarial Machine Learning has proposed new attack and defense mechanisms.

BIG-bench Machine Learning Computational Efficiency

Improving Radioactive Material Localization by Leveraging Cyber-Security Model Optimizations

no code implementations21 Feb 2022 Ryan Sheatsley, Matthew Durbin, Azaree Lintereur, Patrick McDaniel

With four and eight detector arrays, we collect counts of gamma-rays as features for a suite of machine learning models to localize radioactive material.

Malware Detection

On the Robustness of Domain Constraints

no code implementations18 May 2021 Ryan Sheatsley, Blaine Hoak, Eric Pauley, Yohan Beugin, Michael J. Weisman, Patrick McDaniel

Machine learning is vulnerable to adversarial examples: inputs designed to cause models to perform poorly.

Adversarial Examples in Constrained Domains

no code implementations2 Nov 2020 Ryan Sheatsley, Nicolas Papernot, Michael Weisman, Gunjan Verma, Patrick McDaniel

To assess how these algorithms perform, we evaluate them in constrained (e.g., network intrusion detection) and unconstrained (e.g., image recognition) domains.

Network Intrusion Detection

IoTRepair: Systematically Addressing Device Faults in Commodity IoT (Extended Paper)

no code implementations17 Feb 2020 Michael Norris, Berkay Celik, Patrick McDaniel, Gang Tan, Prasanna Venkatesh, Shulin Zhao, Anand Sivasubramaniam

IoT devices are decentralized and deployed in unstable environments, which makes them prone to various kinds of faults, such as device failure and network disruption.

Software Engineering Performance

Real-time Analysis of Privacy-(un)aware IoT Applications

no code implementations24 Nov 2019 Leonardo Babun, Z. Berkay Celik, Patrick McDaniel, A. Selcuk Uluagac

We designed and built IoTWatcH based on an IoT privacy survey that considers the privacy needs of IoT users.

KRATOS: Multi-User Multi-Device-Aware Access Control System for the Smart Home

no code implementations22 Nov 2019 Amit Kumar Sikder, Leonardo Babun, Z. Berkay Celik, Abbas Acar, Hidayet Aksu, Patrick McDaniel, Engin Kirda, A. Selcuk Uluagac

Users can specify their desired access control settings using the interaction module; these settings are translated into access control policies in the backend server.

Cryptography and Security

How Relevant is the Turing Test in the Age of Sophisbots?

no code implementations30 Aug 2019 Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot

Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian.

Cultural Vocal Bursts Intensity Prediction

IoTSan: Fortifying the Safety of IoT Systems

1 code implementation22 Oct 2018 Dang Tu Nguyen, Chengyu Song, Zhiyun Qian, Srikanth V. Krishnamurthy, Edward J. M. Colbert, Patrick McDaniel

In this paper, we design IoTSan, a novel practical system that uses model checking as a building block to reveal "interaction-level" flaws by identifying events that can lead the system to unsafe states.

Cryptography and Security

Program Analysis of Commodity IoT Applications for Security and Privacy: Challenges and Opportunities

1 code implementation18 Sep 2018 Z. Berkay Celik, Earlence Fernandes, Eric Pauley, Gang Tan, Patrick McDaniel

Based on a study of five IoT programming platforms, we identify the key insights resulting from works in both the program analysis and security communities and relate the efficacy of program-analysis techniques to security and privacy issues.

Cryptography and Security Programming Languages

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning

4 code implementations13 Mar 2018 Nicolas Papernot, Patrick McDaniel

However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions.

Machine Translation Malware Detection
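
The core idea of DkNN is to estimate a prediction's credibility from how much the labels of nearby training points agree with it in learned representation space. A minimal numpy sketch of that label-agreement idea, using toy 2-D features standing in for one layer's representations (`knn_label_agreement` is an illustrative helper, not the paper's API):

```python
import numpy as np

def knn_label_agreement(train_feats, train_labels, query_feat, k=5):
    """Return (credibility, majority label): the fraction of the k nearest
    training points that share the majority label -- a DkNN-style proxy."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    counts = np.bincount(nearest)
    return counts.max() / k, counts.argmax()

# Toy data: two well-separated clusters around (0, 0) and (3, 3).
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

# A query deep inside cluster 1 gets label 1 with full neighbor agreement.
cred, pred = knn_label_agreement(feats, labels, np.array([3.0, 3.0]))
print(pred, cred)
```

In the paper's setting this check runs over every layer's representations; adversarial inputs tend to show low agreement somewhere along the way.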

Sensitive Information Tracking in Commodity IoT

1 code implementation22 Feb 2018 Z. Berkay Celik, Leonardo Babun, Amit K. Sikder, Hidayet Aksu, Gang Tan, Patrick McDaniel, A. Selcuk Uluagac

Through this effort, we introduce a rigorously grounded framework for evaluating the use of sensitive information in IoT apps, and therein provide developers, markets, and consumers a means of identifying potential threats to security and privacy.

Cryptography and Security Programming Languages

Ensemble Adversarial Training: Attacks and Defenses

11 code implementations ICLR 2018 Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss.
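
The adversarial training discussed here is built on single-step gradient perturbations such as the Fast Gradient Sign Method (FGSM); the ensemble variant's twist is that perturbations are also generated on other, static pre-trained models. A minimal numpy sketch of one FGSM step on a toy logistic model (all names and values are illustrative):

```python
import numpy as np

def fgsm(x, grad_wrt_x, eps=0.1):
    """FGSM step: nudge every input dimension by eps in the sign
    of the loss gradient, i.e. the direction that increases the loss."""
    return x + eps * np.sign(grad_wrt_x)

# Toy logistic model: p(y=1|x) = sigmoid(w.x). For the logistic loss with
# true label y=1, the gradient of the loss w.r.t. x is (p - 1) * w.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.5])
p = 1 / (1 + np.exp(-w @ x))
grad = (p - 1.0) * w

x_adv = fgsm(x, grad, eps=0.2)
p_adv = 1 / (1 + np.exp(-w @ x_adv))
print(p_adv < p)  # the perturbed input lowers the model's confidence in y=1
```

The degenerate minimum the paper describes arises because a model trained only on its *own* single-step perturbations can learn to make this linear approximation misleading near the data.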

Extending Defensive Distillation

1 code implementation15 May 2017 Nicolas Papernot, Patrick McDaniel

Machine learning is vulnerable to adversarial examples: inputs carefully modified to force misclassification.

BIG-bench Machine Learning

The Space of Transferable Adversarial Examples

2 code implementations11 Apr 2017 Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time.
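
Transferability is typically measured by crafting adversarial examples on a source model and checking how often they also fool an independently trained target model. A minimal numpy sketch with two similar linear classifiers (weights and step size are toy choices, not the paper's setup):

```python
import numpy as np

def transfer_rate(x_adv, y_true, target_predict):
    """Fraction of adversarial inputs (crafted on a *source* model)
    that are misclassified by the *target* model."""
    return np.mean(target_predict(x_adv) != y_true)

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 0.5, (50, 2))       # all classified +1 by both models
y = np.ones(50)
w_src, w_tgt = np.array([1.0, 1.0]), np.array([0.9, 1.1])

# Craft on the source model only: step against the source weight vector.
X_adv = X - 0.6 * np.sign(w_src)

src_pred = lambda Z: np.sign(Z @ w_src)
tgt_pred = lambda Z: np.sign(Z @ w_tgt)
print(transfer_rate(X_adv, y, src_pred), transfer_rate(X_adv, y, tgt_pred))
```

Because the two decision boundaries are close, examples crafted on the source cross the target's boundary too, which is the geometric picture of adversarial subspaces the paper studies.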

On the (Statistical) Detection of Adversarial Examples

no code implementations21 Feb 2017 Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel

Specifically, we augment our ML model with an additional output, in which the model is trained to classify all adversarial inputs.

Malware Classification Network Intrusion Detection
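
The augmentation described here amounts to adding an (N+1)-th "adversarial" class and training on a mix of clean and adversarial inputs. A minimal sketch of building such a training set (the helper name is illustrative; the actual classifier training is omitted):

```python
import numpy as np

def augment_with_outlier_class(X_clean, y_clean, X_adv, n_classes):
    """Build a training set where adversarial inputs receive an extra
    (N+1)-th label, so the classifier can route them to a reject class."""
    X = np.vstack([X_clean, X_adv])
    y = np.concatenate([y_clean, np.full(len(X_adv), n_classes)])
    return X, y

X_c = np.zeros((4, 3)); y_c = np.array([0, 1, 0, 1])   # clean, classes {0, 1}
X_a = np.ones((2, 3))                                   # adversarial inputs
X, y = augment_with_outlier_class(X_c, y_c, X_a, n_classes=2)
print(X.shape, int(y.max()))  # class 2 flags adversarial inputs
```

At inference time, a prediction of the extra class is treated as "adversarial detected" rather than a normal classification.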

Patient-Driven Privacy Control through Generalized Distillation

no code implementations26 Nov 2016 Z. Berkay Celik, David Lopez-Paz, Patrick McDaniel

In this paper, we present privacy distillation, a mechanism which allows patients to control the type and amount of information they wish to disclose to the healthcare providers for use in statistical models.

Towards the Science of Security and Privacy in Machine Learning

no code implementations11 Nov 2016 Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael Wellman

Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics.

BIG-bench Machine Learning Decision Making

Adversarial Perturbations Against Deep Neural Networks for Malware Classification

no code implementations14 Jun 2016 Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel

Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs.

BIG-bench Machine Learning Classification +3

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

no code implementations24 May 2016 Nicolas Papernot, Patrick McDaniel, Ian Goodfellow

We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.

BIG-bench Machine Learning

Crafting Adversarial Input Sequences for Recurrent Neural Networks

1 code implementation28 Apr 2016 Nicolas Papernot, Patrick McDaniel, Ananthram Swami, Richard Harang

Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors.

Autonomous Vehicles BIG-bench Machine Learning +1

Detection under Privileged Information

no code implementations31 Mar 2016 Z. Berkay Celik, Patrick McDaniel, Rauf Izmailov, Nicolas Papernot, Ryan Sheatsley, Raquel Alvarez, Ananthram Swami

In this paper, we consider an alternate learning approach that trains models using "privileged" information (features available at training time but not at runtime) to improve the accuracy and resilience of detection systems.

Face Recognition Malware Classification +1

Practical Black-Box Attacks against Machine Learning

17 code implementations8 Feb 2016 Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami

Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN.

BIG-bench Machine Learning
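
The substitute-model strategy can be sketched end to end: query the black-box victim for labels, fit a local model to mimic it, then craft adversarial examples on the local model and replay them against the victim. A simplified numpy stand-in (the paper's Jacobian-based dataset augmentation is omitted, and the logistic-regression substitute and linear oracle are toy assumptions):

```python
import numpy as np

def train_substitute(oracle, X_seed, epochs=200, lr=0.5):
    """Fit a tiny logistic-regression substitute to a black-box oracle
    using only the labels returned by querying it."""
    y = oracle(X_seed)                       # query the victim for labels
    w = np.zeros(X_seed.shape[1])
    for _ in range(epochs):                  # plain gradient descent
        p = 1 / (1 + np.exp(-X_seed @ w))
        w -= lr * X_seed.T @ (p - y) / len(y)
    return w

# Hypothetical black-box victim: a fixed linear rule we can only query.
oracle = lambda Z: (Z @ np.array([1.0, -1.0]) > 0).astype(float)

rng = np.random.default_rng(2)
X_seed = rng.normal(0, 1, (200, 2))
w_sub = train_substitute(oracle, X_seed)

# An adversarial example crafted on the substitute transfers to the oracle.
x = np.array([1.0, 0.0])                     # oracle says class 1
x_adv = x - 0.8 * np.sign(w_sub)             # FGSM-style step on substitute
print(oracle(x[None])[0], oracle(x_adv[None])[0])
```

The attack needs no gradients from the victim; transferability between the substitute and the target does the work.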

The Limitations of Deep Learning in Adversarial Settings

11 code implementations24 Nov 2015 Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami

In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.

Adversarial Attack Adversarial Defense
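
The paper's algorithm (often called JSMA) crafts samples by ranking input features with a saliency map over the model's forward-derivative Jacobian. A simplified single-feature version of that saliency score (the full method also considers feature pairs and iterative perturbation, which this toy Jacobian omits):

```python
import numpy as np

def saliency_map(jacobian, target):
    """JSMA-style saliency: score each input feature by how much it raises
    the target class while lowering all others.
    jacobian[c, i] = d(output of class c) / d(input feature i)."""
    dt = jacobian[target]                    # effect on the target class
    do = jacobian.sum(axis=0) - dt           # summed effect on other classes
    return np.where((dt > 0) & (do < 0), dt * np.abs(do), 0.0)

# Toy 3-class Jacobian over 4 input features.
J = np.array([[ 0.9, -0.2, 0.1, 0.3],
              [-0.5,  0.4, 0.2, 0.1],
              [-0.3,  0.1, 0.3, 0.2]])
s = saliency_map(J, target=0)
print(int(np.argmax(s)))  # feature 0 most increases class 0 at others' expense
```

Perturbing only the highest-saliency features is what lets the attack change very few input dimensions while still forcing the target class.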

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

2 code implementations14 Nov 2015 Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami

In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs.

Autonomous Vehicles BIG-bench Machine Learning
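
Defensive distillation hinges on the temperature-scaled softmax: the network is trained at a high temperature on soft labels from an initial network, then deployed at temperature 1. A minimal sketch of the softening effect (logits are illustrative):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax used in distillation: higher T yields
    softer label distributions for the student network to train on."""
    z = logits / T
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.0])
hard = softmax_T(logits, T=1.0)     # near one-hot, as used at deployment
soft = softmax_T(logits, T=20.0)    # smoothed targets, as used in training
print(hard.max().round(3), soft.max().round(3))  # 0.936 vs 0.373
```

Training against the softer targets smooths the model's gradients around the data, which is what reduces the effectiveness of gradient-based adversarial sample crafting.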
