1 code implementation • 21 Aug 2018 • Michele Alberti, Vinaychandran Pondenkandath, Marcel Würsch, Manuel Bouillon, Mathias Seuret, Rolf Ingold, Marcus Liwicki
We propose a novel approach to adversarial attacks on neural networks (NN), focusing on tampering with the data used for training rather than generating attacks against already-trained models.
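Training-data tampering of this kind is often illustrated with label flipping: corrupting a fraction of the training labels before the model ever sees them. A minimal sketch of that idea (the function name, fraction, and class count are illustrative, not from the paper):

```python
import numpy as np

def flip_labels(y, fraction=0.1, num_classes=10, seed=0):
    """Illustrative data-poisoning sketch: reassign a fraction of training
    labels to random *incorrect* classes before training begins."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    # Pick the subset of examples to tamper with.
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    for i in idx:
        # Choose any class other than the true one.
        wrong = [c for c in range(num_classes) if c != y[i]]
        y[i] = rng.choice(wrong)
    return y

y_clean = np.zeros(100, dtype=int)
y_poisoned = flip_labels(y_clean, fraction=0.2)
print(int((y_poisoned != y_clean).sum()))  # 20 labels were flipped
```

A model trained on `y_poisoned` then inherits the attacker's corruption without the trained model itself ever being touched.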
1 code implementation • 23 Nov 2017 • Michele Alberti, Manuel Bouillon, Rolf Ingold, Marcus Liwicki
This paper presents an open tool for standardizing the evaluation of the layout analysis task on document images at the pixel level.
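Pixel-level layout evaluation typically compares a predicted label map against a ground-truth map with per-class metrics such as intersection-over-union. A hedged sketch of that kind of computation (this is a generic illustration, not the tool's actual API):

```python
import numpy as np

def pixel_iou(gt, pred, num_classes):
    """Per-class intersection-over-union between two pixel label maps.
    `gt` and `pred` are integer arrays of the same shape."""
    ious = {}
    for c in range(num_classes):
        inter = np.logical_and(gt == c, pred == c).sum()
        union = np.logical_or(gt == c, pred == c).sum()
        # Undefined when the class appears in neither map.
        ious[c] = inter / union if union else float("nan")
    return ious

gt = np.array([[0, 0], [1, 1]])    # e.g. 0 = background, 1 = text region
pred = np.array([[0, 1], [1, 1]])
print(pixel_iou(gt, pred, 2))      # class 0: 0.5, class 1: 2/3
```

Standardizing such metrics in one shared tool makes layout-analysis results comparable across systems, which is the motivation the abstract states.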