Search Results for author: Peter Garraghan

Found 4 papers, 1 paper with code

Compilation as a Defense: Enhancing DL Model Attack Robustness via Tensor Optimization

no code implementations20 Sep 2023 Stefan Trawicki, William Hackett, Lewis Birch, Neeraj Suri, Peter Garraghan

Adversarial Machine Learning (AML) is a rapidly growing field of security research, with an often-overlooked area being model attacks through side-channels.

Model Leeching: An Extraction Attack Targeting LLMs

no code implementations19 Sep 2023 Lewis Birch, William Hackett, Stefan Trawicki, Neeraj Suri, Peter Garraghan

Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced parameter model.

Adversarial Attack

PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models

no code implementations13 Sep 2022 William Hackett, Stefan Trawicki, Zhengxin Yu, Neeraj Suri, Peter Garraghan

Adversarial extraction attacks constitute an insidious threat against Deep Learning (DL) models, in which an adversary aims to steal the architecture, parameters, and hyper-parameters of a targeted DL model.

Adversarial Attack
