Search Results for author: George Adam

Found 3 papers, 0 papers with code

Evaluating Ensemble Robustness Against Adversarial Attacks

no code implementations · 12 May 2020 · George Adam, Romain Speciel

Adversarial examples, slightly perturbed inputs crafted to fool a neural network, are known to transfer between models: an example that fools one model will often fool another.
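The transfer phenomenon the abstract describes can be illustrated with a deliberately tiny sketch (my own toy example, not the paper's setup): two linear "models" with similar weights, and an FGSM-style perturbation computed only against model A that also flips model B's prediction.

```python
import numpy as np

def predict(w, x):
    # Binary decision: sign of the linear score w·x.
    return np.sign(w @ x)

rng = np.random.default_rng(0)
w_a = np.array([1.0, 2.0, -1.0])           # model A
w_b = w_a + 0.1 * rng.standard_normal(3)   # model B, "trained similarly"

x = np.array([0.5, 0.2, 0.4])              # clean input; both models say +1
eps = 0.5

# FGSM-style step crafted against model A only: for a linear model the
# score gradient w.r.t. the input is just w_a, so we step against its sign.
x_adv = x - eps * np.sign(w_a)

print(predict(w_a, x), predict(w_b, x))        # clean: both +1
print(predict(w_a, x_adv), predict(w_b, x_adv))  # perturbed: both flip to -1
```

Because the two weight vectors are nearly parallel, a perturbation aligned against one gradient is also aligned against the other — the toy analogue of transferability between independently trained networks.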

Reducing Adversarial Example Transferability Using Gradient Regularization

no code implementations · 16 Apr 2019 · George Adam, Petr Smirnov, Benjamin Haibe-Kains, Anna Goldenberg

We investigate the transferability of adversarial examples between models using the angle between the input-output Jacobians of different models.
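A minimal sketch of the quantity the abstract mentions — the angle between two models' input-output Jacobians — assuming toy models and a finite-difference Jacobian of my own construction (the paper's models and estimator may differ):

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    # Central finite-difference Jacobian of f: R^n -> R^m at x.
    n, m = x.size, f(x).size
    J = np.zeros((m, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def jacobian_angle_deg(f, g, x):
    # Angle between the two Jacobians, flattened to vectors.
    ja, jb = jacobian(f, x).ravel(), jacobian(g, x).ravel()
    cos = ja @ jb / (np.linalg.norm(ja) * np.linalg.norm(jb))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Two toy "models" with nearly proportional Jacobians -> small angle.
W = np.array([[1.0, 2.0], [0.5, -1.0]])
f = lambda x: np.tanh(W @ x)
g = lambda x: np.tanh(1.1 * W @ x)

angle = jacobian_angle_deg(f, g, np.array([0.1, -0.2]))
print(f"{angle:.2f} degrees")
```

The intuition: a small angle means the two models' local linearizations agree, so a perturbation that moves one model's output also moves the other's — consistent with using this angle to study transferability.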

Understanding Neural Architecture Search Techniques

no code implementations · 31 Mar 2019 · George Adam, Jonathan Lorraine

This reduction in computation is enabled via weight sharing, as in Efficient Neural Architecture Search (ENAS).
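The weight-sharing idea can be sketched in a few lines (a hypothetical toy, not the ENAS implementation): every candidate architecture is a path through a single shared pool of operation weights, so evaluating a new candidate reuses existing weights instead of training from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared pool: one weight matrix per operation, reused by every candidate.
ops = {name: rng.standard_normal((4, 4)) for name in ["op_a", "op_b", "op_c"]}

def forward(arch, x):
    # An architecture is just a sequence of op names indexing the shared pool.
    for name in arch:
        x = np.maximum(ops[name] @ x, 0.0)  # shared op followed by ReLU
    return x

x = rng.standard_normal(4)
out1 = forward(["op_a", "op_b"], x)  # candidate 1
out2 = forward(["op_c", "op_b"], x)  # candidate 2 reuses op_b's weights
```

Because both candidates index the same `ops["op_b"]`, gradient updates made while training one candidate immediately benefit the other — the source of ENAS's computational savings, and also of the evaluation biases this paper examines.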

