Search Results for author: Richard Kenway

Found 2 papers, 0 papers with code

Protection against Cloning for Deep Learning

no code implementations • 29 Mar 2018 • Richard Kenway

The susceptibility of deep learning to adversarial attack can be understood in the framework of the Renormalisation Group (RG) and the vulnerability of a specific network may be diagnosed provided the weights in each layer are known.

Adversarial Attack
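
The abstract above notes that a specific network's vulnerability can be diagnosed once the weights in each layer are known. As a rough illustration of a weight-based diagnostic (not the paper's RG analysis), the sketch below uses the product of per-layer spectral norms as a worst-case bound on how a small input perturbation can be amplified through a fully connected ReLU network; the toy weights and the 1-Lipschitz activation assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

def layer_spectral_norms(weights):
    """Largest singular value of each layer's weight matrix."""
    return [np.linalg.norm(W, ord=2) for W in weights]

def perturbation_growth_bound(weights):
    """Product of per-layer spectral norms: a crude upper bound on how much
    a small input perturbation can grow through the linear parts of the
    network, assuming 1-Lipschitz activations such as ReLU."""
    norms = layer_spectral_norms(weights)
    return float(np.prod(norms)), norms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 3-layer fully connected network with random weights (illustrative only).
    weights = [rng.normal(scale=0.5, size=(64, 32)),
               rng.normal(scale=0.5, size=(32, 32)),
               rng.normal(scale=0.5, size=(32, 10))]
    bound, per_layer = perturbation_growth_bound(weights)
    print("per-layer spectral norms:", [round(n, 3) for n in per_layer])
    print("worst-case amplification bound:", round(bound, 3))
```

A large bound suggests that small input changes can, in the worst case, produce large output changes, which is the kind of layer-wise, weight-dependent quantity the abstract alludes to.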

Vulnerability of Deep Learning

no code implementations • 16 Mar 2018 • Richard Kenway

The Renormalisation Group (RG) provides a framework in which it is possible to assess whether a deep-learning network is sensitive to small changes in the input data and hence prone to error, or susceptible to adversarial attack.

Adversarial Attack · General Classification
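
This abstract frames vulnerability as sensitivity of the network to small changes in the input data. The minimal sketch below estimates such sensitivity for a toy ReLU network by measuring, via finite differences, how strongly the output responds to small random input perturbations; it is an assumed illustration of an input-sensitivity probe, not an implementation of the paper's RG framework.

```python
import numpy as np

def toy_network(x, weights):
    """Forward pass of a small fully connected ReLU network (illustrative only)."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)
    return h @ weights[-1]

def local_sensitivity(x, weights, eps=1e-4, n_dirs=32, seed=0):
    """Finite-difference estimate of the largest output change per unit of
    input change over random perturbation directions: a simple proxy for
    local sensitivity around the point x."""
    rng = np.random.default_rng(seed)
    base = toy_network(x, weights)
    ratios = []
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        out = toy_network(x + eps * d, weights)
        ratios.append(np.linalg.norm(out - base) / eps)
    return max(ratios)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = [rng.normal(scale=0.5, size=(16, 16)) for _ in range(3)]
    x = rng.normal(size=16)
    print("estimated local sensitivity:", round(local_sensitivity(x, weights), 3))
```

A network for which this estimate is large near typical inputs is, in the loose sense used here, more prone to error under small input changes and hence a more plausible target for adversarial perturbations.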
