Search Results for author: Martin Gubri

Found 7 papers, 6 papers with code

Calibrating Large Language Models Using Their Generations Only

1 code implementation • 9 Mar 2024 • Dennis Ulmer, Martin Gubri, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

As large language models (LLMs) are increasingly deployed in user-facing applications, building trust and maintaining safety by accurately quantifying a model's confidence in its prediction becomes even more important.

Question Answering • Text Generation
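
The entry above only states the goal; a minimal sketch may help show what "using their generations only" can look like in practice: sample several answers from a black-box model and use their agreement as a confidence score. This is a generic self-consistency heuristic with an assumed `generate` callable, not necessarily the calibration method proposed in the paper.

```python
# Illustrative only: a simple confidence signal from generations alone (no logits).
# Sample several answers and use the frequency of the most common one as the score.
# This is a generic black-box heuristic, not necessarily the paper's method.
from collections import Counter

def generation_confidence(generate, prompt, n_samples=10):
    """`generate` is any black-box text generator: prompt -> answer string (assumed)."""
    answers = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples
```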

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

1 code implementation • 20 Feb 2024 • Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

Large Language Model (LLM) services and models often come with legal rules on who can use them and how they must use them.

Language Modelling • Large Language Model

Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability

1 code implementation • 5 Apr 2023 • Martin Gubri, Maxime Cordy, Yves Le Traon

A common hypothesis to explain why early stopping the training of the surrogate model improves adversarial example transferability is that deep neural networks (DNNs) first learn robust features, which are more generic and thus make a better surrogate.

LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity

1 code implementation • 26 Jul 2022 • Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen

We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks.

Adversarial Attack
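
The abstract sentence above names the key idea: collect surrogate weights from a large geometric vicinity of a trained model and attack them collectively. The sketch below illustrates that idea under assumptions — weights are gathered by running SGD with a high constant learning rate from a pretrained surrogate, and adversarial examples are crafted with PGD against the averaged logits. The collection schedule, hyperparameters, and attack loop are illustrative, not the paper's exact recipe.

```python
# Illustrative sketch (not the paper's exact recipe): collect surrogate weights
# in the vicinity of a trained model via SGD with a high constant learning rate,
# then craft adversarial examples against the resulting ensemble.
import copy
import torch
import torch.nn.functional as F

def collect_vicinity_models(model, loader, n_models=10, lr=0.05,
                            steps_per_model=100, device="cpu"):
    """Run SGD from a trained model and snapshot weights periodically (assumed schedule)."""
    model = copy.deepcopy(model).to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    snapshots, step = [], 0
    while len(snapshots) < n_models:
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
            step += 1
            if step % steps_per_model == 0:
                snapshots.append(copy.deepcopy(model).eval())
                if len(snapshots) == n_models:
                    break
    return snapshots

def ensemble_pgd(models, x, y, eps=8/255, alpha=2/255, iters=10):
    """Craft L-inf adversarial examples against the averaged logits of the ensemble."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```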

Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers

no code implementations • 14 Dec 2020 • Adriano Franci, Maxime Cordy, Martin Gubri, Mike Papadakis, Yves Le Traon

Graph-based Semi-Supervised Learning (GSSL) is a practical solution to learn from a limited amount of labelled data together with a vast amount of unlabelled data.

Data Poisoning
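
For readers unfamiliar with the setting the abstract sentence describes, here is a generic graph-based semi-supervised learning example using scikit-learn's LabelSpreading: a handful of labelled points propagate their labels over a similarity graph to many unlabelled ones (marked with -1). This illustrates GSSL itself, not the influence-driven poisoning attack studied in the paper.

```python
# Generic illustration of graph-based semi-supervised learning (not the paper's
# poisoning attack): labels spread over a k-NN similarity graph from a few
# labelled points to the unlabelled majority (marked with -1).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y = make_moons(n_samples=300, noise=0.1, random_state=0)
y_partial = np.full_like(y, -1)                       # start with everything unlabelled
labelled_idx = np.random.RandomState(0).choice(len(y), size=10, replace=False)
y_partial[labelled_idx] = y[labelled_idx]             # reveal only 10 labels

gssl = LabelSpreading(kernel="knn", n_neighbors=7)
gssl.fit(X, y_partial)

mask = y_partial == -1
print("accuracy on unlabelled points:", (gssl.transduction_[mask] == y[mask]).mean())
```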

Efficient and Transferable Adversarial Examples from Bayesian Neural Networks

1 code implementation • 10 Nov 2020 • Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen

An established way to improve the transferability of black-box evasion attacks is to craft the adversarial examples on an ensemble-based surrogate to increase diversity.

Adversarial Attack • Bayesian Inference
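
The abstract sentence above points to ensemble-based surrogates; read together with the Bayesian Inference tag, the natural interpretation is an ensemble obtained by sampling surrogate weights from an approximate posterior. The sketch below uses Monte Carlo dropout as a cheap stand-in for posterior sampling and a single-step sign attack on the averaged loss; the paper's actual posterior approximation and attack may differ.

```python
# Illustrative sketch only: MC dropout serves here as a stand-in for sampling
# surrogate networks from an approximate posterior; the attack averages the
# loss over several stochastic forward passes before taking a gradient step.
import torch
import torch.nn.functional as F

def mc_dropout_fgsm(model, x, y, eps=8/255, n_samples=8):
    """Single-step L-inf attack against the mean loss over dropout samples."""
    model.train()  # keep dropout layers stochastic at inference time
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.stack([F.cross_entropy(model(x_adv), y)
                        for _ in range(n_samples)]).mean()
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()
```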
