no code implementations • 19 Aug 2023 • Dominik Werner Wolf, Markus Ulrich, Nikhil Kapoor
This paper investigates the domain shift problem by evaluating the sensitivity of two perception models to different windshield configurations.
no code implementations • 29 Apr 2021 • Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
Our paper addresses both machine learning experts and safety engineers: the former may benefit from the broad range of machine learning topics covered and the discussion of the limitations of recent methods.
no code implementations • 11 Jan 2021 • Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin J. Varghese, Fabian Hüger, Peter Schlicht, Tim Fingscheidt
Although CNNs achieve state-of-the-art performance on clean images, almost imperceptible changes to the input, referred to as adversarial perturbations, can severely deceive the model.
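To make the notion of an adversarial perturbation concrete, here is a minimal sketch of the fast gradient sign method (FGSM, a standard technique from the literature, not the method of the paper above) applied to a tiny logistic-regression "model" with hypothetical fixed weights. A small signed step in the direction of the loss gradient flips the prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability of class 1 for a logistic-regression model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturbation(w, b, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. x.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w              # dL/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad_x)

# Hypothetical weights standing in for a trained model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.2])        # clean input, confidently class 1
y = 1.0

x_adv = fgsm_perturbation(w, b, x, y, eps=0.4)
print(predict(w, b, x))               # > 0.5: predicted class 1
print(predict(w, b, x_adv))           # < 0.5: prediction flipped
```

Each input coordinate moves by at most `eps`, yet the decision changes; with image classifiers the same mechanism produces perturbations that are nearly invisible to a human observer.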
no code implementations • 2 Dec 2020 • Nikhil Kapoor, Chun Yuan, Jonas Löhdefink, Roland Zimmermann, Serin Varghese, Fabian Hüger, Nico Schmidt, Peter Schlicht, Tim Fingscheidt
Deep neural networks are often not robust to semantically irrelevant changes in the input.
no code implementations • 2 Dec 2020 • Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt
Despite recent advancements, deep neural networks are not robust against adversarial perturbations.
no code implementations • 9 Nov 2020 • Paul Schwerdtner, Florens Greßner, Nikhil Kapoor, Felix Assion, René Sass, Wiebke Günther, Fabian Hüger, Peter Schlicht
In this paper, we propose a framework for assessing the risk associated with deploying a machine learning model in a specified environment.