no code implementations • 11 Nov 2023 • Zhivar Sourati, Darshan Deshpande, Filip Ilievski, Kiril Gashteovski, Sascha Saralajew
Downstream applications often require text classification models to be accurate, robust, and interpretable.
no code implementations • 25 May 2022 • Sascha Saralajew, Ammar Shaker, Zhao Xu, Kiril Gashteovski, Bhushan Kotnis, Wiem Ben Rim, Jürgen Quittek, Carolin Lawrence
Inspired by the Turing test, we introduce a human-centric assessment framework in which a leading domain expert accepts or rejects the solutions of both an AI system and another domain expert.
1 code implementation • 25 Apr 2022 • Lukas Ewecker, Lars Ohnemus, Robin Schwager, Stefan Roos, Tim Brühl, Sascha Saralajew
We show that this approach allows for an automated derivation of different object representations, such as binary maps or bounding boxes, so that detection models can be trained on different annotation variants and the problem of providently detecting vehicles at night can be tackled from different perspectives.
no code implementations • 23 Jul 2021 • Lukas Ewecker, Ebubekir Asan, Lars Ohnemus, Sascha Saralajew
To demonstrate its usefulness, the proposed algorithm is deployed in a test vehicle, where the detected light artifacts are used to proactively control the glare-free high-beam system.
1 code implementation • 27 May 2021 • Sascha Saralajew, Lars Ohnemus, Lukas Ewecker, Ebubekir Asan, Simon Isele, Stefan Roos
In this paper, we study how to map this intuitive human behavior to computer vision algorithms that detect oncoming vehicles at night solely from the light reflections their headlights cause.
1 code implementation • 31 Dec 2020 • Lars Ohnemus, Lukas Ewecker, Ebubekir Asan, Stefan Roos, Simon Isele, Jakob Ketterer, Leopold Müller, Sascha Saralajew
As humans, we intuitively anticipate oncoming vehicles before they are physically visible by detecting the light reflections caused by their headlamps.
no code implementations • 3 Dec 2020 • Simon T. Isele, Marcel P. Schilling, Fabian E. Klein, Sascha Saralajew, J. Marius Zoellner
RALF provides plausibility labels for radar raw detections, distinguishing between artifacts and targets.
1 code implementation • NeurIPS 2020 • Sascha Saralajew, Lars Holdijk, Thomas Villmann
Current certification methods are computationally expensive and limited to attacks that optimize the manipulation with respect to a norm.
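For intuition, here is a minimal, hedged sketch of how a certified robustness bound can be obtained cheaply for a nearest-prototype classifier; it uses only the triangle inequality and is a toy illustration, not the paper's method, with made-up prototypes and labels.

```python
import numpy as np

# Toy nearest-prototype classifier (hypothetical prototypes, one per class).
prototypes = np.array([[0.0, 0.0], [4.0, 0.0]])
labels = np.array([0, 1])

def certified_radius(x, y):
    """Lower bound on the Euclidean perturbation needed to flip the decision.

    Moving x by eps changes each prototype distance by at most eps (triangle
    inequality), so the decision cannot flip while
    eps < (d_minus - d_plus) / 2, where d_plus is the distance to the closest
    correct-class prototype and d_minus to the closest other-class prototype.
    """
    d = np.linalg.norm(prototypes - x, axis=1)
    d_plus = d[labels == y].min()
    d_minus = d[labels != y].min()
    return max((d_minus - d_plus) / 2.0, 0.0)

print(certified_radius(np.array([1.0, 0.0]), 0))  # (3 - 1) / 2 = 1.0
```

Note that this bound costs only one distance computation per prototype, in contrast to certification schemes that must solve an optimization problem per input.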
1 code implementation • NeurIPS 2019 • Sascha Saralajew, Lars Holdijk, Maike Rees, Ebubekir Asan, Thomas Villmann
The decomposition of objects into generic components, combined with probabilistic reasoning, provides by design a clear interpretation of the classification decision process.
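A toy sketch of this component-based reasoning, heavily simplified relative to the paper (only positive and negative reasoning, hand-picked toy numbers): each class scores how well the detected components agree with what it expects to be present or absent, so every component's contribution to the decision can be inspected.

```python
import numpy as np

# d[k]: probability that component k was detected in the input (toy values).
d = np.array([0.9, 0.1, 0.8])

# r_pos[k, c]: how strongly class c expects component k to be PRESENT.
# r_neg[k, c]: how strongly class c expects component k to be ABSENT.
r_pos = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])
r_neg = np.array([[0.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 0.0]])

def class_scores(d, r_pos, r_neg):
    """Per-class agreement between detections and reasoning, in [0, 1]."""
    agreement = d @ r_pos + (1.0 - d) @ r_neg
    return agreement / (r_pos + r_neg).sum(axis=0)

scores = class_scores(d, r_pos, r_neg)
print(scores.argmax())  # class 0: its expected components were detected
```

Because the decision is a sum of per-component terms, one can read off which component detections (or absences) drove the classification.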
1 code implementation • 1 Feb 2019 • Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann
The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state of the art in robust neural network methods.
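For context, a minimal sketch of the Generalized LVQ (GLVQ) classifier that these robustness results concern; the prototypes and input are hypothetical toy values, not from the paper. GLVQ classifies by nearest prototype and scores an input with the relative distance difference mu(x) = (d_plus - d_minus) / (d_plus + d_minus), which is negative exactly when the input is correctly classified.

```python
import numpy as np

# Toy prototype set: one prototype per class (hypothetical values).
prototypes = np.array([[0.0, 0.0], [4.0, 4.0]])
labels = np.array([0, 1])

def classify(x):
    """Assign x to the class of its nearest prototype (squared Euclidean)."""
    dists = np.sum((prototypes - x) ** 2, axis=1)
    return labels[np.argmin(dists)]

def glvq_score(x, y):
    """GLVQ score mu(x) in [-1, 1]; negative iff x is classified correctly.

    d_plus: distance to the closest prototype of the true class y.
    d_minus: distance to the closest prototype of any other class.
    """
    dists = np.sum((prototypes - x) ** 2, axis=1)
    d_plus = dists[labels == y].min()
    d_minus = dists[labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

x = np.array([1.0, 1.0])
print(classify(x))           # 0: nearest prototype belongs to class 0
print(glvq_score(x, 0) < 0)  # True: correctly classified
```

The margin-like structure of this score is one intuition for why prototype-based models exhibit a high base robustness: flipping the decision requires moving the input far enough to change which prototype is closest.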
no code implementations • 4 Dec 2018 • Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann
Neural networks currently dominate the machine learning community, and they do so for good reason.