Search Results for author: Thomas Tanay

Found 10 papers, 2 papers with code

FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR Imaging

no code implementations · 7 Jan 2022 · Sibi Catley-Chandar, Thomas Tanay, Lucas Vandroux, Aleš Leonardis, Gregory Slabaugh, Eduardo Pérez-Pellitero

We introduce a strategy that learns to jointly align and assess the alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map that robustly merges the frames into a single high quality HDR image.
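The abstract describes merging multiple exposures with an uncertainty-driven attention map. As a rough illustration only (this is not the paper's actual network; the function name, array layout, and the softmax-over-negative-uncertainty weighting are assumptions), such a per-pixel uncertainty-weighted merge could be sketched as:

```python
import numpy as np

# Hypothetical sketch: merge N exposure frames into one HDR image using
# per-pixel attention weights derived from an uncertainty estimate.
# Lower uncertainty -> higher weight. Not the paper's implementation.
def uncertainty_weighted_merge(frames, uncertainty):
    """frames, uncertainty: arrays of shape (N, H, W)."""
    weights = np.exp(-uncertainty)                  # confidence per pixel
    weights /= weights.sum(axis=0, keepdims=True)   # softmax-style normalization
    return (weights * frames).sum(axis=0)           # weighted per-pixel merge
```

In the actual method these weights are predicted by an HDR-aware network rather than computed from a fixed formula.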

Multiple-Identity Image Attacks Against Face-based Identity Verification

no code implementations · 20 Jun 2019 · Jerone T. A. Andrews, Thomas Tanay, Lewis D. Griffin

New quantitative results are presented that support an explanation in terms of the geometry of the representation spaces used by the verification systems.

Batch Normalization is a Cause of Adversarial Vulnerability

no code implementations · 6 May 2019 · Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, Graham W. Taylor

Batch normalization (batch norm) is often used in an attempt to stabilize and accelerate training in deep neural networks.

A New Angle on L2 Regularization

no code implementations · 28 Jun 2018 · Thomas Tanay, Lewis D. Griffin

Imagine two high-dimensional clusters and a hyperplane separating them.

Tasks: General Classification, L2 Regularization

Built-in Vulnerabilities to Imperceptible Adversarial Perturbations

no code implementations · 19 Jun 2018 · Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin

Designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult.

Adversarial Training Versus Weight Decay

2 code implementations · 10 Apr 2018 · Angus Galloway, Thomas Tanay, Graham W. Taylor

Performance-critical machine learning models should be robust to input perturbations not seen during training.

A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples

no code implementations · 27 Aug 2016 · Thomas Tanay, Lewis Griffin

Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs.

Tasks: General Classification
