1 code implementation • 1 Apr 2022 • Hazem Fahmy, Fabrizio Pastore, Lionel Briand, Thomas Stifter
When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing.
1 code implementation • 1 Apr 2022 • Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
Learning on synthetic data and transferring the resulting properties to their real counterparts is an important challenge for reducing costs and increasing safety in machine learning.
1 code implementation • 1 Apr 2022 • Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
While input images close to known training samples converge to the same or a similar attractor, inputs containing unknown features are unstable and converge to different training samples, potentially removing or altering their characteristic features.
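The attractor behaviour described above can be sketched with a toy, idempotent "autoencoder" (a linear projection learned from random stand-in training data). The deep convolutional model, the iteration count, and the convergence tolerance are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained autoencoder: project onto a low-dimensional
# subspace fitted to "training" data (i.e., a linear autoencoder).
train = rng.normal(size=(100, 16))
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:4]  # 4-dimensional latent space

def autoencode(x):
    return (x @ basis.T) @ basis  # encode, then decode

def iterate_to_attractor(x, steps=20, tol=1e-6):
    """Feed the reconstruction back as input until it stops changing."""
    for i in range(steps):
        y = autoencode(x)
        if np.linalg.norm(y - x) < tol:
            return y, i
        x = y
    return x, steps

x = rng.normal(size=16)                 # unseen input
attractor, n_iter = iterate_to_attractor(x)
# The attractor is a fixed point: another pass leaves it unchanged.
print(np.allclose(autoencode(attractor), attractor))  # → True
```

Because a linear projection is idempotent, this toy model reaches its fixed point after a single pass; a deep autoencoder would typically need several iterations, and the distance travelled before convergence can serve as the instability signal for unknown inputs.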
no code implementations • 7 May 2021 • Steve Dias Da Cruz, Bertram Taetz, Oliver Wasenmüller, Thomas Stifter, Didier Stricker
Common domain shift problem formulations consider integrating multiple source domains, or the target domain itself, during training.
no code implementations • 11 Dec 2020 • Fitash Ul Haq, Donghwan Shin, Lionel C. Briand, Thomas Stifter, Jun Wang
In this paper, we present an approach to automatically generate test data for KP-DNNs using many-objective search.
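A minimal sketch of the many-objective idea, assuming one maximization objective per keypoint (the prediction error of that keypoint on the generated test scene). The `simulate_and_predict` stand-in and the random-search loop are illustrative only; the paper uses evolutionary many-objective search, not this naive loop:

```python
import random

NUM_KEYPOINTS = 3  # illustrative; real KP-DNNs predict many more

def simulate_and_predict(params):
    """Stand-in for rendering a test scene from the candidate parameters
    and running the keypoint-detection DNN: returns per-keypoint errors."""
    r = random.Random(hash(tuple(params)) % (2**32))
    return [r.random() * sum(params) for _ in range(NUM_KEYPOINTS)]

def search(generations=50, pop_size=8, n_params=4):
    # Keep, per objective, the candidate with the largest keypoint error
    # found so far: each keypoint is its own search target.
    best = [None] * NUM_KEYPOINTS
    best_err = [0.0] * NUM_KEYPOINTS
    rng = random.Random(42)
    for _ in range(generations):
        pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
        for cand in pop:
            for k, err in enumerate(simulate_and_predict(cand)):
                if err > best_err[k]:
                    best_err[k], best[k] = err, cand
    return best, best_err

tests, errors = search()  # one hard test scene per keypoint objective
```

The design point is that a single scalar fitness would let one easy-to-break keypoint dominate; keeping a separate objective per keypoint forces the search to find failing scenes for each of them.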
no code implementations • 6 Nov 2020 • Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
Our method exploits the availability of identical sceneries under different illumination and environmental conditions for which we formulate a partially impossible reconstruction target: the input image will not convey enough information to reconstruct the target in its entirety.
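A toy numerical illustration of why the partially impossible target rewards keeping only the shared content: arrays stand in for images, and the additive content-plus-nuisance model is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Identical scenery (the shared "content") captured under two different
# illumination/environment conditions, modelled here as additive noise.
content = rng.normal(size=(8, 8))
img_a = content + 0.3 * rng.normal(size=(8, 8))  # condition A (input)
img_b = content + 0.3 * rng.normal(size=(8, 8))  # condition B (target)

def reconstruction_loss(output, target):
    return float(np.mean((output - target) ** 2))

# Partially impossible target: the model sees img_a but must reconstruct
# img_b. The nuisance component of img_b cannot be predicted from img_a,
# so outputting only the shared content beats copying the input through.
loss_content_only = reconstruction_loss(content, img_b)
loss_copy_input = reconstruction_loss(img_a, img_b)
print(loss_content_only < loss_copy_input)  # → True
```

In expectation the copy-through loss pays for both images' nuisance variance, while the content-only output pays for just the target's, which is what pushes the learned representation toward illumination-invariant features.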
1 code implementation • 10 Jan 2020 • Steve Dias Da Cruz, Oliver Wasenmüller, Hans-Peter Beise, Thomas Stifter, Didier Stricker
We release SVIRO, a synthetic dataset of sceneries in the passenger compartment of ten different vehicles, in order to analyze the generalization capacity and reliability of machine-learning-based approaches when trained on a limited number of variations (e.g., identical backgrounds and textures, few instances per class).