Search Results for author: Arno Blaas

Found 12 papers, 7 papers with code

Robust multimodal models have outlier features and encode more concepts

no code implementations · 19 Oct 2023 · Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas

In this work, we bridge this gap by probing the representation spaces of 12 robust multimodal models with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp).

The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning

1 code implementation · 20 Jul 2023 · Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella

We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens.

Self-Supervised Learning

DUET: 2D Structured and Approximately Equivariant Representations

1 code implementation · 28 Jun 2023 · Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella

We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2D representations organized in a matrix structure, and equivariant with respect to transformations acting on the input data.

Self-Supervised Learning · Transfer Learning

Adversarial Attacks on Graph Classifiers via Bayesian Optimisation

1 code implementation · NeurIPS 2021 · Xingchen Wan, Henry Kenlay, Robin Ru, Arno Blaas, Michael Osborne, Xiaowen Dong

While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.

Adversarial Robustness · Bayesian Optimisation · +1

Challenges of Adversarial Image Augmentations

no code implementations · NeurIPS Workshop ICBINB 2021 · Arno Blaas, Xavier Suau, Jason Ramapuram, Nicholas Apostoloff, Luca Zappella

Image augmentations applied during training are crucial for the generalization performance of image classifiers.

Adversarial Attacks on Graph Classification via Bayesian Optimisation

1 code implementation · 4 Nov 2021 · Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong

While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.

Adversarial Robustness · Bayesian Optimisation · +1

On Invariance Penalties for Risk Minimization

no code implementations · 17 Jun 2021 · Kia Khezeli, Arno Blaas, Frank Soboczenski, Nicholas Chia, John Kalantari

We discuss the role of its eigenvalues in the relationship between the risk and the invariance penalty, and demonstrate that it is ill-conditioned for said counterexamples.

Domain Generalization

Adversarial Robustness Guarantees for Gaussian Processes

1 code implementation · 7 Apr 2021 · Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska

Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.

Adversarial Robustness · Gaussian Processes
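The entry above hinges on the fact that a Gaussian process returns a closed-form posterior variance alongside each prediction, which is what makes principled uncertainty (and robustness analysis built on it) possible. A minimal illustrative sketch of exact GP regression with an RBF kernel — not the paper's certification method; the kernel, lengthscale, and noise level are arbitrary choices for the example:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between the rows of a and b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, X_star, noise=1e-2):
    # Exact GP regression posterior mean and variance at test points X_star.
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X, X_star)
    K_ss = rbf(X_star, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, K_s)
    mean = K_s.T @ alpha
    var = np.diag(K_ss - v.T @ v)  # predictive variance per test point
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
# One test point near the data, one far away from it.
mean, var = gp_posterior(X, y, np.array([[0.5], [5.0]]))
# The variance grows away from the training data, flagging low confidence.
```

The growing variance far from the data is exactly the kind of calibrated uncertainty signal that makes GPs attractive in safety-critical settings.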

BayesOpt Adversarial Attack

1 code implementation · ICLR 2020 · Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal

Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.

Adversarial Attack · Bayesian Optimisation · +2

Adversarial Robustness Guarantees for Classification with Gaussian Processes

1 code implementation · 28 May 2019 · Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts

We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset and a subset of the MNIST dataset, providing comparisons of different GPC training techniques, and show how our method can be used for interpretability analysis.

Adversarial Robustness · Classification · +2
