no code implementations • 19 Oct 2023 • Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas
In this work, we bridge this gap by probing the representation spaces of 12 robust multimodal models with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp).
1 code implementation • 20 Jul 2023 • Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella
We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through the lens of this bound.
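One standard variational form of such an entropy-plus-reconstruction lower bound is the Barber–Agakov bound; the exact decomposition used in the paper may differ, so the symbols below (representations $Z_1, Z_2$ of two views, variational decoder $q$) are illustrative assumptions:

```latex
I(Z_1; Z_2) = H(Z_1) - H(Z_1 \mid Z_2)
            \;\geq\; \underbrace{H(Z_1)}_{\text{entropy}}
            \;+\; \underbrace{\mathbb{E}_{p(z_1, z_2)}\!\left[\log q(z_1 \mid z_2)\right]}_{\text{reconstruction}}
```

The inequality holds because the gap is $\mathbb{E}_{p(z_2)}\,\mathrm{KL}\!\left(p(z_1 \mid z_2)\,\|\,q(z_1 \mid z_2)\right) \geq 0$, so maximizing entropy plus reconstruction maximizes a valid lower bound on the MI.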
1 code implementation • 28 Jun 2023 • Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella
We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2D representations organized in a matrix structure and equivariant with respect to transformations acting on the input data.
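This is not the DUET architecture itself, but a toy numpy sketch of what a matrix-structured, equivariant representation means: a transformation of the input (here a cyclic shift) acts on the representation in a structured way (here a row shift). The function name `matrix_rep` and the choice of transformation are illustrative assumptions:

```python
import numpy as np

def matrix_rep(x):
    """Toy 2D (matrix-structured) representation of a 1D signal:
    row k holds the input cyclically shifted by k positions."""
    return np.stack([np.roll(x, k) for k in range(len(x))])

x = np.arange(5.0)
g_x = np.roll(x, 1)  # transformation acting on the input: cyclic shift by 1

# Equivariance: transforming the input corresponds to a predictable
# row shift of the matrix representation.
assert np.allclose(matrix_rep(g_x), np.roll(matrix_rep(x), -1, axis=0))
```

The point of such structure is that downstream code can read off how the input was transformed directly from the representation, rather than having the transformation entangled with content.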
1 code implementation • NeurIPS 2021 • Xingchen Wan, Henry Kenlay, Robin Ru, Arno Blaas, Michael Osborne, Xiaowen Dong
While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.
no code implementations • NeurIPS Workshop ICBINB 2021 • Arno Blaas, Xavier Suau, Jason Ramapuram, Nicholas Apostoloff, Luca Zappella
Image augmentations applied during training are crucial for the generalization performance of image classifiers.
no code implementations • ICML Workshop AML 2021 • Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael Osborne, Xiaowen Dong
Graph neural networks have been shown to be vulnerable to adversarial attacks.
no code implementations • 17 Jun 2021 • Kia Khezeli, Arno Blaas, Frank Soboczenski, Nicholas Chia, John Kalantari
We discuss the role of its eigenvalues in the relationship between the risk and the invariance penalty, and demonstrate that it is ill-conditioned for said counterexamples.
1 code implementation • 7 Apr 2021 • Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.
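As a minimal from-scratch sketch of the principled uncertainty GPs provide, here is standard GP regression with a Cholesky solve; the RBF kernel, noise level, and function names are illustrative assumptions, not the certification method of the paper:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise=1e-2):
    """Posterior mean and variance of a GP at test points X_star."""
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X_star, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(rbf(X_star, X_star)) - (v**2).sum(0)
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
mean, var = gp_predict(X, y, np.array([[1.0], [10.0]]))
```

The predictive variance is small near the training data and grows far from it, which is exactly the calibrated uncertainty signal that makes GPs attractive for safety-critical use.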
no code implementations • 7 Jan 2021 • Arno Blaas, Stephen J. Roberts
It is desirable, and often a necessity, for machine learning models to be robust against adversarial attacks.
1 code implementation • ICLR 2020 • Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal
Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.
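The paper addresses this with a Bayesian-optimisation-guided search; as a much simpler illustration of the query-limited black-box setting it targets, here is a naive random-search attack on a toy linear classifier. The model and the names `predict` and `random_search_attack` are illustrative assumptions; a BayesOpt attack would replace the uniform proposals with a surrogate-model-guided search to cut the query count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box model: the attacker can query predicted labels only,
# with no access to gradients or internals.
w = rng.normal(size=10)

def predict(x):
    return int(x @ w > 0)

def random_search_attack(x, eps=0.5, max_queries=500):
    """Search the L-inf ball of radius eps around x for a label flip,
    counting how many model queries were spent."""
    label = predict(x)
    for queries in range(1, max_queries + 1):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)
        if predict(candidate) != label:
            return candidate, queries
    return None, max_queries

adversarial, n_queries = random_search_attack(np.zeros(10))
```

The query counter is the quantity black-box attack research tries to minimise, since each query corresponds to a potentially detectable interaction with the target model.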
1 code implementation • 28 May 2019 • Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts
We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset and a subset of the MNIST dataset, providing comparisons of different GPC training techniques, and show how our method can be used for interpretability analysis.