Search Results for author: Alhussein Fawzi

Found 21 papers, 6 papers with code

Are Labels Required for Improving Adversarial Robustness?

1 code implementation NeurIPS 2019 Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli

Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification.

Adversarial Robustness

Discovering faster matrix multiplication algorithms with reinforcement learning

2 code implementations Nature 2022 Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Francisco J. R. Ruiz, Julian Schrittwieser, Grzegorz Swirszcz, David Silver, Demis Hassabis, Pushmeet Kohli

Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor’s algorithm improves on Strassen’s two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago.

Reinforcement Learning (RL)
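For context on what AlphaTensor searches over: fast matrix-multiplication algorithms correspond to low-rank decompositions of the matrix multiplication tensor. The classic instance is Strassen's one-level algorithm, which multiplies 2×2 matrices with 7 multiplications instead of the naive 8; the sketch below verifies those well-known formulas numerically (it illustrates the search space only, not AlphaTensor's RL method itself).

```python
import numpy as np

def strassen_2x2(A, B):
    """One-level Strassen: multiply two 2x2 matrices using only 7
    multiplications (m1..m7) instead of the naive 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of C = A @ B.
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.random.randn(2, 2)
B = np.random.randn(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Applied recursively to block matrices, this yields sub-cubic multiplication; AlphaTensor searches for analogous decompositions for larger sizes, such as the 4 × 4 finite-field case mentioned above.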

Universal adversarial perturbations

10 code implementations CVPR 2017 Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard

Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
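The core quantity in this line of work is the fooling rate: the fraction of inputs whose prediction flips under a single fixed, norm-bounded perturbation. A minimal numerical sketch, using a hypothetical linear classifier and a crude perturbation direction as stand-ins (the paper studies deep networks and builds the perturbation with an iterative DeepFool-based procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 64))       # hypothetical linear classifier (10 classes)
images = rng.standard_normal((100, 64)) # hypothetical batch of flattened inputs

def predict(x):
    return (x @ W.T).argmax(axis=-1)

def project(v, xi):
    """Project v onto the l2 ball of radius xi (the 'very small' constraint)."""
    n = np.linalg.norm(v)
    return v if n <= xi else v * (xi / n)

# A crude universal direction for illustration: push all inputs toward
# one class's weight vector. The paper instead aggregates per-image
# minimal perturbations iteratively.
v = project(W[0] - W.mean(axis=0), xi=2.0)

clean = predict(images)
perturbed = predict(images + v)          # the SAME v is added to every image
fooling_rate = (clean != perturbed).mean()
```

The striking empirical finding of the paper is that for state-of-the-art deep networks such an image-agnostic `v` exists with a very small norm yet a high fooling rate.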

SaaS: Speed as a Supervisor for Semi-supervised Learning

1 code implementation ECCV 2018 Safa Cicek, Alhussein Fawzi, Stefano Soatto

We introduce the SaaS Algorithm for semi-supervised learning, which uses learning speed during stochastic gradient descent in a deep neural network to measure the quality of an iterative estimate of the posterior probability of unknown labels.

Adversarial vulnerability for any classifier

no code implementations NeurIPS 2018 Alhussein Fawzi, Hamza Fawzi, Omar Fawzi

Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations.

General Classification

Robustness of classifiers to uniform $\ell_p$ and Gaussian noise

no code implementations 22 Feb 2018 Jean-Yves Franceschi, Alhussein Fawzi, Omar Fawzi

We study the robustness of classifiers to various kinds of random noise models.

Robustness of classifiers to universal perturbations: a geometric perspective

no code implementations ICLR 2018 Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, Stefano Soatto

Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers.

Classification regions of deep neural networks

no code implementations 26 May 2017 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto

The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space.

Classification General Classification

Robustness of classifiers: from adversarial to random noise

no code implementations NeurIPS 2016 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes.

Analysis of classifiers' robustness to adversarial perturbations

no code implementations 9 Feb 2015 Alhussein Fawzi, Omar Fawzi, Pascal Frossard

To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks.

General Classification

Manitest: Are classifiers really invariant?

no code implementations 23 Jul 2015 Alhussein Fawzi, Pascal Frossard

Invariance to geometric transformations is a highly desirable property of automatic classifiers in many image recognition tasks.

Data Augmentation

Multi-task additive models with shared transfer functions based on dictionary learning

no code implementations 19 May 2015 Alhussein Fawzi, Mathieu Sinn, Pascal Frossard

Additive models form a widely popular class of regression models which represent the relation between covariates and response variables as the sum of low-dimensional transfer functions.

Additive models Dictionary Learning +2

Dictionary learning for fast classification based on soft-thresholding

no code implementations 9 Feb 2014 Alhussein Fawzi, Mike Davies, Pascal Frossard

The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver.

Classification Dictionary Learning +1
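The soft-thresholding operator at the heart of this classifier is standard and worth making concrete: encoding an input is a single matrix product followed by elementwise shrinkage, rather than an iterative sparse-coding solve. The dictionary `D`, classifier `w`, and threshold `alpha` below are random placeholders; in the paper they are learned jointly via the DC program.

```python
import numpy as np

def soft_threshold(z, alpha):
    """Elementwise soft-thresholding: shrink toward zero by alpha,
    zeroing out small coefficients (a one-shot sparse encoder)."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))   # hypothetical dictionary: 20-dim inputs, 50 atoms
w = rng.standard_normal(50)         # hypothetical linear classifier over codes
x = rng.standard_normal(20)         # hypothetical input

code = soft_threshold(D.T @ x, alpha=0.5)  # fast feature extraction
score = w @ code                            # linear classification on the code
label = 1 if score >= 0 else -1
```

The speed claim in the title comes from this structure: test-time classification costs one matrix-vector product plus a thresholding pass.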

Image registration with sparse approximations in parametric dictionaries

no code implementations 28 Jan 2013 Alhussein Fawzi, Pascal Frossard

In this paper we examine the problem of image registration from a new perspective, in which images are represented by sparse approximations over parametric dictionaries of geometric functions.

Image Registration

Verification of deep probabilistic models

no code implementations 6 Dec 2018 Krishnamurthy Dvijotham, Marta Garnelo, Alhussein Fawzi, Pushmeet Kohli

For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in the input to the model.

Machine Translation Translation

Empirical Study of the Topology and Geometry of Deep Networks

no code implementations CVPR 2018 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto

We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary.

General Classification

Learning dynamic polynomial proofs

no code implementations NeurIPS 2019 Alhussein Fawzi, Mateusz Malinowski, Hamza Fawzi, Omar Fawzi

In this work, we introduce a machine learning based method to search for a dynamic proof within these proof systems.

BIG-bench Machine Learning Inductive Bias

Adversarial Robustness through Local Linearization

no code implementations NeurIPS 2019 Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli

Using this regularizer, we exceed the current state of the art and achieve 47% adversarial accuracy on ImageNet with $\ell_\infty$ adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack.

Adversarial Defense Adversarial Robustness
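The quantity this regularizer targets is the deviation of the loss from its first-order Taylor expansion around an input, gamma(delta) = |loss(x + delta) − loss(x) − deltaᵀ∇loss(x)|, maximized over a norm ball. A toy numerical sketch, with a stand-in scalar loss and random sampling in place of the paper's inner maximization:

```python
import numpy as np

def loss(x):
    # Hypothetical smooth nonlinear loss; in the paper this is the
    # network's surrogate loss at a training example.
    return np.sum(np.tanh(x) ** 2)

def loss_grad(x):
    # Analytic gradient of the stand-in loss above.
    t = np.tanh(x)
    return 2.0 * t * (1.0 - t ** 2)

def linearity_violation(x, delta):
    """|loss(x + delta) - loss(x) - delta . grad loss(x)|: how far the
    loss deviates from local linearity at x in direction delta."""
    return abs(loss(x + delta) - loss(x) - delta @ loss_grad(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
eps = 4.0 / 255.0  # the l-infinity radius quoted in the abstract
# Crude inner maximization: sample corners of the l-inf ball of radius eps.
deltas = eps * rng.choice([-1.0, 1.0], size=(256, 8))
gamma = max(linearity_violation(x, d) for d in deltas)
```

Penalizing (an estimate of) this maximum during training pushes the loss surface toward local linearity, which is the mechanism the paper connects to robustness without the full cost of multi-step adversarial training.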
