1 code implementation • NeurIPS 2019 • Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli
Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification.
2 code implementations • Nature 2022 • Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Francisco J. R. Ruiz, Julian Schrittwieser, Grzegorz Swirszcz, David Silver, Demis Hassabis, Pushmeet Kohli
Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor’s algorithm improves on Strassen’s two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago.
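For context, the baseline AlphaTensor improves on is Strassen's classic scheme, which multiplies two 2 × 2 matrices with 7 scalar multiplications instead of the naive 8. A minimal sketch (this is the well-known textbook construction, not AlphaTensor's discovered algorithm):

```python
# Strassen's scheme: 7 multiplications (m1..m7) instead of 8.
# AlphaTensor searches for analogous low-rank decompositions of
# larger matrix-multiplication tensors.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

C = strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C)  # matches the naive product: [[19, 22], [43, 50]]
```

Applied recursively to block matrices, the 7-multiplication base case is what yields the sub-cubic asymptotic complexity.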
3 code implementations • CVPR 2016 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard
State-of-the-art deep neural networks have achieved impressive results on many image classification tasks.
10 code implementations • CVPR 2017 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard
Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
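The flavor of the construction can be illustrated on a toy, hypothetical *linear* classifier f(x) = w·x + b, where the minimal perturbation that crosses the decision boundary has a closed form, r = −f(x)·w/‖w‖². A sketch of the aggregate-and-project loop (the data, budget, and classifier here are illustrative assumptions, not the paper's deep-network setting):

```python
import numpy as np

# Hypothetical fixed linear classifier f(x) = w.x + b.
w, b = np.array([1.0, 0.0]), 0.0
f = lambda x: x @ w + b

# Toy dataset (illustrative values only).
X = np.array([[1.0, 0.0], [0.8, 1.0], [1.5, -1.0],
              [-1.0, 0.5], [-0.6, -0.3]])
xi = 2.0            # l2 budget for the universal perturbation
v = np.zeros(2)     # universal perturbation, built up over the data

for x in X:         # one pass: aggregate minimal per-point perturbations
    if np.sign(f(x + v)) == np.sign(f(x)):   # v does not fool x yet
        r = -f(x + v) * w / (w @ w)          # minimal boundary-crossing step
        v += 1.05 * r                        # small overshoot past the boundary
        if np.linalg.norm(v) > xi:           # project back onto the l2 ball
            v *= xi / np.linalg.norm(v)

fooling_rate = np.mean(np.sign(f(X + v)) != np.sign(f(X)))
print(v, fooling_rate)
```

For a linear classifier a single v cannot fool both classes, so the fooling rate saturates; the surprising empirical finding of the paper is how high it gets for deep networks on natural images.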
1 code implementation • CVPR 2019 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, Pascal Frossard
State-of-the-art classifiers have been shown to be largely vulnerable to adversarial perturbations.
1 code implementation • ECCV 2018 • Safa Cicek, Alhussein Fawzi, Stefano Soatto
We introduce the SaaS Algorithm for semi-supervised learning, which uses learning speed during stochastic gradient descent in a deep neural network to measure the quality of an iterative estimate of the posterior probability of unknown labels.
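The underlying signal can be illustrated with a toy experiment (a hedged sketch of the learning-speed idea, not the SaaS algorithm itself): under gradient descent, a label assignment consistent with the data structure drives the loss down much faster than a permuted one.

```python
import numpy as np

def logistic_loss(w, X, y):
    # y in {-1, +1}; mean log(1 + exp(-y * <w, x>))
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def learning_speed(X, y, steps=30, lr=0.5):
    # Loss decrease after a few gradient-descent steps from w = 0.
    w = np.zeros(X.shape[1])
    start = logistic_loss(w, X, y)
    for _ in range(steps):
        z = X @ w
        grad = -(y / (1 + np.exp(y * z))) @ X / len(y)
        w -= lr * grad
    return start - logistic_loss(w, X, y)

# Two well-separated clusters (synthetic, fixed seed).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y_true = np.hstack([np.ones(50), -np.ones(50)])
y_rand = rng.permutation(y_true)   # scrambled candidate labeling

speed_true = learning_speed(X, y_true)
speed_rand = learning_speed(X, y_rand)
print(speed_true, speed_rand)      # true labels are learned much faster
```

SaaS turns this kind of speed signal into a criterion for scoring iterative estimates of the unknown labels during semi-supervised training.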
no code implementations • NeurIPS 2018 • Alhussein Fawzi, Hamza Fawzi, Omar Fawzi
Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations.
no code implementations • 22 Feb 2018 • Jean-Yves Franceschi, Alhussein Fawzi, Omar Fawzi
We study the robustness of classifiers to various kinds of random noise models.
no code implementations • ICLR 2018 • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, Stefano Soatto
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers.
no code implementations • 26 May 2017 • Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto
The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space.
no code implementations • NeurIPS 2016 • Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes.
no code implementations • 9 Feb 2015 • Alhussein Fawzi, Omar Fawzi, Pascal Frossard
To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks.
no code implementations • 23 Jul 2015 • Alhussein Fawzi, Pascal Frossard
Invariance to geometric transformations is a highly desirable property of automatic classifiers in many image recognition tasks.
no code implementations • 19 May 2015 • Alhussein Fawzi, Mathieu Sinn, Pascal Frossard
Additive models form a widely used class of regression models that represent the relation between covariates and response variables as a sum of low-dimensional transfer functions.
no code implementations • 9 Feb 2014 • Alhussein Fawzi, Mike Davies, Pascal Frossard
The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver.
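The generic DCA iteration behind such solvers can be sketched on a one-dimensional toy problem (an illustrative example of difference-of-convex programming, not the paper's dictionary-learning objective): minimize f(x) = g(x) − h(x) by repeatedly linearizing the concave part −h at the current iterate and solving the resulting convex subproblem.

```python
# DCA on f(x) = x**4 - 2*x**2, split as g(x) = x**4 (convex)
# minus h(x) = 2*x**2 (convex). Each step solves
#   min_x  g(x) - h'(x_k) * x,
# whose optimality condition 4*x**3 = h'(x_k) = 4*x_k gives the
# closed-form update x_{k+1} = x_k ** (1/3) (for x_k > 0).
f = lambda x: x**4 - 2 * x**2

x = 2.0                            # starting point (illustrative)
for _ in range(20):
    grad_h = 4 * x                 # h'(x_k)
    x = (grad_h / 4) ** (1 / 3)    # argmin of x**4 - grad_h * x
print(x, f(x))                     # -> approaches x = 1, f(1) = -1
```

Each subproblem is convex, and f decreases monotonically along the iterates, which is what makes DC splittings attractive for the jointly non-convex dictionary-plus-classifier objective.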
no code implementations • 28 Jan 2013 • Alhussein Fawzi, Pascal Frossard
We examine in this paper the problem of image registration from the new perspective where images are given by sparse approximations in parametric dictionaries of geometric functions.
no code implementations • 6 Dec 2018 • Krishnamurthy Dvijotham, Marta Garnelo, Alhussein Fawzi, Pushmeet Kohli
For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in the input to the model.
no code implementations • CVPR 2018 • Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto
We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary.
no code implementations • NeurIPS 2019 • Alhussein Fawzi, Mateusz Malinowski, Hamza Fawzi, Omar Fawzi
In this work, we introduce a machine learning based method to search for a dynamic proof within these proof systems.
no code implementations • NeurIPS 2019 • Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli
Using this regularizer, we exceed the current state of the art, achieving 47% adversarial accuracy on ImageNet under strong, untargeted, white-box l-infinity attacks of radius 4/255.
no code implementations • 22 Feb 2024 • Francisco J. R. Ruiz, Tuomas Laakkonen, Johannes Bausch, Matej Balog, Mohammadamin Barekatain, Francisco J. H. Heras, Alexander Novikov, Nathan Fitzpatrick, Bernardino Romera-Paredes, John van de Wetering, Alhussein Fawzi, Konstantinos Meichanetzidis, Pushmeet Kohli
A key challenge in realizing fault-tolerant quantum computers is circuit optimization.