Search Results for author: Blaine Nelson

Found 6 papers, 3 papers with code

Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

1 code implementation • 4 Dec 2023 • Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi

In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that only requires black-box access to the target LLM.
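The abstract names the key ingredients of TAP: iterative refinement of candidate prompts in a tree, pruning of unpromising branches, and only black-box queries to the target. The control flow can be sketched as below; the attacker, evaluator, and target functions are toy stand-ins (not real LLM APIs), and the branching, width, and scoring threshold are illustrative assumptions, not the paper's settings.

```python
import random

def attacker_refine(prompt, branching=2):
    """Toy attacker: produce `branching` refined candidate prompts."""
    return [f"{prompt} [variant {i}]" for i in range(branching)]

def evaluator_on_topic(prompt):
    """Toy evaluator: keep only candidates judged on-topic."""
    return "goal" in prompt

def target_score(prompt):
    """Toy target + judge: score jailbreak success in [0, 10]."""
    return random.uniform(0, 10)

def tap(goal, depth=3, width=4, threshold=9.5):
    """Tree-of-attacks loop: branch, prune off-topic, query, keep best w."""
    frontier = [goal]
    best = (0.0, goal)
    for _ in range(depth):
        candidates = [c for p in frontier for c in attacker_refine(p)]
        candidates = [c for c in candidates if evaluator_on_topic(c)]  # pruning phase 1
        scored = sorted(((target_score(c), c) for c in candidates), reverse=True)
        if scored and scored[0][0] > best[0]:
            best = scored[0]
        if best[0] >= threshold:          # early exit on a successful jailbreak
            break
        frontier = [c for _, c in scored[:width]]  # pruning phase 2: keep best w
    return best

score, prompt = tap("goal: test prompt")
```

The two pruning phases (drop off-topic candidates before querying, then keep only the top-scoring leaves) are what keep the number of black-box queries small as the tree deepens.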


Support Vector Machines under Adversarial Label Contamination

no code implementations • 1 Jun 2022 • Huang Xiao, Battista Biggio, Blaine Nelson, Han Xiao, Claudia Eckert, Fabio Roli

Machine learning algorithms are increasingly being applied in security-related tasks such as spam and malware detection, although their security properties against deliberate attacks are not yet well understood.

Active Learning • BIG-bench Machine Learning • +1

Evasion Attacks against Machine Learning at Test Time

1 code implementation • 21 Aug 2017 • Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli

In security-sensitive applications, the success of machine learning systems depends on a thorough vetting of their resistance to adversarial data.
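Evasion at test time, in the spirit of this paper, means nudging a malicious sample along the gradient of the classifier's discriminant function until it crosses the decision boundary. A minimal sketch against a linear classifier, where the weights and starting sample are illustrative assumptions rather than any trained detector:

```python
import numpy as np

# Assumed linear discriminant g(x) = w.x + b; g(x) > 0 means "malicious".
w = np.array([1.0, 2.0])   # illustrative classifier weights
b = -1.0                   # illustrative bias
g = lambda x: w @ x + b

def evade(x, step=0.1, max_iter=200):
    """Move x along the negative gradient of g until it evades detection."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if g(x) <= 0:      # sample now classified as benign
            break
        x -= step * w      # for a linear g, the gradient w.r.t. x is w
    return x

x0 = np.array([2.0, 1.0])  # g(x0) = 3 > 0: initially detected
x_adv = evade(x0)
```

The paper's point is that for differentiable (or surrogate-approximated) classifiers, this gradient-following procedure finds evading samples with small perturbations; for nonlinear models the gradient of g replaces the constant w above.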

BIG-bench Machine Learning • Malware Detection • +1

Security Evaluation of Support Vector Machines in Adversarial Environments

no code implementations • 30 Jan 2014 • Battista Biggio, Igino Corona, Blaine Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, Fabio Roli

Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering.

Intrusion Detection • Malware Detection

Bayesian Differential Privacy through Posterior Sampling

no code implementations • 5 Jun 2013 • Christos Dimitrakakis, Blaine Nelson, Zuhe Zhang, Aikaterini Mitrokotsa, Benjamin Rubinstein

All our general results hold for arbitrary database metrics, including those for the common definition of differential privacy.
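The mechanism named in the title, releasing a single sample from the posterior rather than a point estimate, can be illustrated with a conjugate Beta-Bernoulli model. This is a toy sketch of the idea only; the model, prior, and data below are illustrative assumptions, and the paper's actual contribution is the analysis of when such posterior sampling satisfies a differential-privacy-style guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=100)   # illustrative private Bernoulli observations
alpha0, beta0 = 1.0, 1.0              # Beta(1, 1) prior over the success rate

# Conjugacy: posterior is Beta(alpha0 + #ones, beta0 + #zeros).
# Instead of releasing the exact count or the MLE, release ONE posterior sample.
k = int(data.sum())
theta_release = rng.beta(alpha0 + k, beta0 + len(data) - k)
```

The released value carries posterior uncertainty rather than the raw statistic, which is the source of the privacy guarantee the paper analyzes for arbitrary database metrics.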

Bayesian Inference • Privacy Preserving

Poisoning Attacks against Support Vector Machines

1 code implementation • 27 Jun 2012 • Battista Biggio, Blaine Nelson, Pavel Laskov

Such attacks inject specially crafted training data that increases the SVM's test error.
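The paper constructs its poison point by gradient ascent on the SVM's validation error; the sketch below substitutes a simpler stand-in, a hand-placed, wrongly labeled training point, and a small subgradient-descent hinge-loss trainer, to show the shape of the attack. The trainer, data, and poison placement are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Toy linear SVM via subgradient descent on the hinge loss."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:          # margin violated
                w += lr * (y[i] * X[i] - lam * w); b += lr * y[i]
            else:
                w -= lr * lam * w                   # regularization only
    return w, b

def error(w, b, X, y):
    return float(np.mean(np.sign(X @ w + b) != y))

# Illustrative well-separated two-class data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2, 2], 0.3, (40, 2)),
               rng.normal([-2, -2], 0.3, (40, 2))])
y = np.r_[np.ones(40), -np.ones(40)]

w, b = train_linear_svm(X, y)
clean_err = error(w, b, X, y)

# Inject one crafted point deep in the positive region with a flipped label,
# then retrain and measure error on the CLEAN data.
X_poison = np.vstack([X, [[2.0, 2.0]]])
y_poison = np.r_[y, -1.0]
w_p, b_p = train_linear_svm(X_poison, y_poison)
poisoned_err = error(w_p, b_p, X, y)
```

In the paper, the attacker instead solves for the poison point's coordinates by following the gradient of the validation error through the SVM's optimality conditions, which makes a single well-placed point far more damaging than a naive label flip.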
