no code implementations • 31 Jan 2023 • Mohammed Oualid Attaoui, Hazem Fahmy, Fabrizio Pastore, Lionel Briand
In our previous work, we proposed a white-box approach (HUDD) and a black-box approach (SAFE) to automatically characterize DNN failures.
1 code implementation • 15 Oct 2022 • Hazem Fahmy, Fabrizio Pastore, Lionel Briand
We present HUDD, a tool that supports safety analysis practices for systems enabled by Deep Neural Networks (DNNs) by automatically identifying the root causes for DNN errors and retraining the DNN.
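For intuition only, here is a minimal sketch of the retraining step such a tool supports: fine-tune the DNN on its training set augmented with error-inducing images that were assigned to root-cause clusters. The PyTorch model, tensor shapes, and hyperparameters below are illustrative assumptions, not the tool's actual code.

```python
# Hedged sketch of a HUDD-style retraining step (illustrative, not the
# authors' implementation). After error-inducing images are grouped into
# root-cause clusters, fine-tune the model on the original training set
# augmented with those images.
import torch
import torch.nn as nn

def retrain(model, train_images, train_labels, cluster_images, cluster_labels,
            epochs=3, lr=1e-4):
    """Fine-tune `model` on the training set plus cluster-assigned images."""
    images = torch.cat([train_images, cluster_images])
    labels = torch.cat([train_labels, cluster_labels])
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    return model

# Toy usage with random tensors standing in for real images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
retrain(model,
        torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)),
        torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)))
```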
1 code implementation • 1 Apr 2022 • Hazem Fahmy, Fabrizio Pastore, Lionel Briand, Thomas Stifter
When Deep Neural Networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing.
1 code implementation • 13 Jan 2022 • Mohammed Oualid Attaoui, Hazem Fahmy, Fabrizio Pastore, Lionel Briand
Experimental results, based on case studies in the automotive domain, show the superior ability of SAFE to identify different root causes of DNN errors.
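As a rough illustration of such a black-box pipeline: extract features for failure-inducing images with a generic pretrained CNN, then cluster them with DBSCAN so each cluster suggests a distinct root cause. The extractor choice (VGG16) and the DBSCAN parameters below are assumptions for the sketch, not SAFE's actual configuration.

```python
# Hedged sketch of a SAFE-style black-box pipeline (illustrative).
import torch
import torchvision.models as models
from sklearn.cluster import DBSCAN

# A real pipeline would use pretrained weights (e.g.,
# weights="IMAGENET1K_V1", which downloads them); weights=None keeps
# this sketch runnable offline, at the cost of random features.
extractor = models.vgg16(weights=None).features
extractor.eval()

failing_images = torch.randn(20, 3, 224, 224)  # stand-in for real failures
with torch.no_grad():
    feats = extractor(failing_images).flatten(start_dim=1).numpy()

# Each DBSCAN cluster groups failures that plausibly share a root cause;
# eps and min_samples would be tuned on real data.
labels = DBSCAN(eps=50.0, min_samples=2).fit_predict(feats)
print(labels)  # -1 marks images treated as noise
```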
no code implementations • 13 Jan 2021 • Oscar Cornejo, Fabrizio Pastore, Lionel Briand
On-board embedded software developed for spaceflight systems (space software) must adhere to stringent software quality assurance procedures.
1 code implementation • 3 Feb 2020 • Hazem Fahmy, Fabrizio Pastore, Mojtaba Bagherzadeh, Lionel Briand
To address these problems in the context of DNNs analyzing images, we propose HUDD, an approach that automatically supports the identification of root causes for DNN errors.
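A minimal sketch of the heatmap-clustering idea, under simplifying assumptions: HUDD derives relevance heatmaps (e.g., with LRP) at internal DNN layers and clusters error-inducing images hierarchically based on heatmap distances; here a plain gradient saliency map on the input stands in for LRP, and the model and parameters are illustrative.

```python
# Hedged sketch of HUDD-style heatmap clustering (a simplification:
# gradient saliency stands in for LRP at internal layers).
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import fcluster, linkage

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
error_images = torch.randn(12, 3, 32, 32, requires_grad=True)

# Gradient of the top predicted score w.r.t. each input pixel.
scores = model(error_images).max(dim=1).values
scores.sum().backward()
heatmaps = error_images.grad.abs().flatten(start_dim=1).numpy()

# Agglomerative clustering on heatmap distances; each resulting cluster
# groups errors that plausibly share a root cause.
tree = linkage(heatmaps, method="average", metric="euclidean")
clusters = fcluster(tree, t=4, criterion="maxclust")
print(clusters)
```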