Search Results for author: Moe Matsuki

Found 3 papers, 1 paper with code

Representation Quality Explain Adversarial Attacks

no code implementations • 25 Sep 2019 • Danilo Vasconcellos Vargas, Shashank Kotyan, Moe Matsuki

The main idea lies in the fact that some features are present in unknown classes, and that unknown classes can be defined as combinations of previously learned features without representation bias (a bias towards representations that map only the current set of input-outputs and their boundary).
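The idea of defining an unknown class as a combination of previously learned features can be illustrated with a small, generic sketch. Everything below (the feature extractor, the class names, the signature vectors) is a hypothetical placeholder for illustration, not the authors' model:

    # Hedged sketch: an *unseen* class is described only as a combination of
    # features that were learned from seen classes. All names are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical learned feature extractor: maps an input to a feature vector.
    def extract_features(x, W):
        return np.maximum(W @ x, 0.0)  # simple ReLU projection as a stand-in

    n_features, n_inputs = 8, 16
    W = rng.normal(size=(n_features, n_inputs))

    # Each class, seen or unseen, is a combination of the same learned features;
    # the unseen class is composed purely from features of the seen ones.
    class_signatures = {
        "seen_cat":   np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float),
        "seen_dog":   np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=float),
        "unseen_fox": np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float),
    }

    def classify(x):
        f = extract_features(x, W)
        f = f / (np.linalg.norm(f) + 1e-8)
        # Pick the class whose feature combination best matches the representation.
        scores = {c: float((sig / (np.linalg.norm(sig) + 1e-8)) @ f)
                  for c, sig in class_signatures.items()}
        return max(scores, key=scores.get)

    print(classify(rng.normal(size=n_inputs)))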

Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences

1 code implementation • 15 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki

A crucial step to understanding the rationale for this lack of robustness is to assess the potential of the neural networks' representation to encode the existing features.

Clustering, Zero-Shot Learning
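One generic way to assess whether a network's representation encodes class-relevant features is to measure how well its hidden-layer activations cluster by class, in line with the clustering and zero-shot tags above. The sketch below uses random placeholder features and standard scikit-learn clustering metrics as assumptions; it is not necessarily the paper's exact evaluation:

    # Hedged sketch: probe representation quality by checking how well
    # activations cluster by class. The random features are placeholders
    # for a real network's hidden-layer outputs.
    import numpy as np
    from sklearn.metrics import davies_bouldin_score, silhouette_score

    rng = np.random.default_rng(1)

    n_classes, per_class, dim = 5, 40, 32
    labels = np.repeat(np.arange(n_classes), per_class)

    # Placeholder "representations": class-dependent cluster centers plus noise.
    centers = rng.normal(scale=3.0, size=(n_classes, dim))
    features = centers[labels] + rng.normal(size=(labels.size, dim))

    # Lower Davies-Bouldin / higher silhouette -> class structure is better
    # preserved in the representation.
    print("Davies-Bouldin:", davies_bouldin_score(features, labels))
    print("Silhouette:    ", silhouette_score(features, labels))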
