1 code implementation • NeurIPS 2020 • Sascha Saralajew, Lars Holdijk, Thomas Villmann
Current certification methods are computationally expensive and limited to attacks that optimize the manipulation with respect to a norm.
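The cheap certification alluded to here rests on the hypothesis margin of nearest-prototype classifiers: under a norm-induced distance, half the gap between the closest wrong-class and closest correct-class prototype is a certified robustness radius. A minimal numpy sketch of that classical bound, with all names illustrative:

```python
import numpy as np

def certified_radius(x, prototypes, labels, y_true, ord=2):
    """Hypothesis-margin certificate for a nearest-prototype classifier.

    Under a norm-induced distance, no perturbation with norm smaller than
    (d_minus - d_plus) / 2 can change the prediction, where d_plus is the
    distance to the closest prototype of the true class and d_minus the
    distance to the closest prototype of any other class.
    """
    dists = np.linalg.norm(prototypes - x, ord=ord, axis=1)
    d_plus = dists[labels == y_true].min()   # closest correct-class prototype
    d_minus = dists[labels != y_true].min()  # closest other-class prototype
    return 0.5 * (d_minus - d_plus)          # negative => already misclassified
```

Evaluating this radius costs one distance computation per prototype, which is why certifying a prototype classifier is cheap compared with the expensive certification methods the entry contrasts against.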
1 code implementation • NeurIPS 2019 • Sascha Saralajew, Lars Holdijk, Maike Rees, Ebubekir Asan, Thomas Villmann
The decomposition of objects into generic components, combined with probabilistic reasoning, provides by design a clear interpretation of the classification decision process.
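The reasoning mentioned here accumulates, per class, evidence from components that are detected and components that are absent. The following is a simplified sketch of such component-based reasoning, not the paper's exact Classification-by-Components formulation; all array shapes and names are illustrative:

```python
import numpy as np

def class_scores(detection, pos, neg):
    """Simplified component-based reasoning over K components and C classes.

    detection : (K,)   probability that each component is present in the input.
    pos, neg  : (K, C) probability that a detected / absent component counts
                as evidence for each class; remaining probability mass is
                treated as indefinite reasoning and contributes nothing.
    """
    evidence = detection[:, None] * pos + (1.0 - detection)[:, None] * neg
    scores = evidence.sum(axis=0)     # aggregate evidence per class
    return scores / scores.sum()      # normalize to a distribution over classes
```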
1 code implementation • 1 Feb 2019 • Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann
The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state-of-the-art in robust neural network methods.
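Generalized Tangent LVQ replaces point prototypes with affine subspaces and classifies by the tangent distance from an input to each subspace; this margin-based geometry underlies the robustness reported above. A minimal sketch of the tangent distance, assuming a basis matrix V with orthonormal columns:

```python
import numpy as np

def tangent_distance(x, w, V):
    """Distance from x to the affine subspace w + span(V).

    V is assumed to have orthonormal columns, so V @ V.T projects onto the
    subspace and the norm of the residual is the tangent distance.
    """
    r = x - w
    return np.linalg.norm(r - V @ (V.T @ r))  # residual after projection
```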
no code implementations • 17 Jan 2019 • Thomas Villmann, John Ravichandran, Andrea Villmann, David Nebel, Marika Kaden
An appropriate choice of the activation function (such as ReLU, sigmoid, or swish) plays an important role in the performance of (deep) multilayer perceptrons (MLPs) for classification and regression learning.
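For reference, the three activations named above have the following standard definitions, shown as a short numpy sketch; beta is the usual swish temperature parameter:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x); beta = 1 recovers the SiLU variant
    return x * sigmoid(beta * x)
```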
no code implementations • 4 Dec 2018 • Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann
Neural networks currently dominate the machine learning community, and they do so for good reasons.
no code implementations • 18 Oct 2013 • Martin Riedel, Marika Kästner, Fabrice Rossi, Thomas Villmann
In this contribution, we propose a method for l1-regularization in prototype-based relevance learning vector quantization (LVQ) for sparse relevance profiles.
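One plausible way to realize such l1-regularization is a proximal (soft-thresholding) step applied to the relevance profile after each learning update. The sketch below is an illustrative guess at that scheme, not the paper's exact algorithm; alpha is a hypothetical regularization strength:

```python
import numpy as np

def sparsify_relevances(lam, alpha):
    """Soft-threshold the relevance profile, then renormalize.

    Soft-thresholding is the proximal operator of the l1 penalty and drives
    small relevance weights exactly to zero, yielding a sparse profile.
    Relevances are kept non-negative and rescaled to sum to one, as is
    common for relevance LVQ.
    """
    lam = np.maximum(np.abs(lam) - alpha, 0.0)  # shrink toward zero
    s = lam.sum()
    return lam / s if s > 0 else np.full_like(lam, 1.0 / lam.size)
```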