Search Results for author: Michael Weiss

Found 9 papers, 6 papers with code

Adopting Two Supervisors for Efficient Use of Large-Scale Remote Deep Neural Networks

1 code implementation • 5 Apr 2023 • Michael Weiss, Paolo Tonella

Systems relying on large-scale DNNs thus have to call the corresponding model over the network, leading to substantial costs for hosting and running the large-scale remote model, costs which are often charged on a per-use basis.

Image Classification • Question Answering • +2

Uncertainty Quantification for Deep Neural Networks: An Empirical Comparison and Usage Guidelines

no code implementations • 14 Dec 2022 • Michael Weiss, Paolo Tonella

After surveying the main approaches to uncertainty estimation and discussing their pros and cons, we motivate the need for an empirical assessment method tailored to the setting in which supervisors are used, where the accuracy of the DNN matters only as long as the supervisor lets the deep learning system (DLS) continue to operate.

Uncertainty Quantification
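The supervisor pattern recurring in these papers can be illustrated with a minimal sketch: a monitor that inspects the DNN's predictive uncertainty and lets the system act on a prediction only when that uncertainty is low. The function names and the threshold below are illustrative assumptions, not the papers' actual implementation.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a softmax output vector (higher = more uncertain)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(probs * np.log(probs)))

def supervise(probs: np.ndarray, threshold: float = 0.5):
    """Return (accepted, label): accept the DNN's prediction only if its
    predictive entropy is below the threshold, otherwise abstain so a
    fallback (e.g. a human or a safe default) can take over."""
    if predictive_entropy(probs) < threshold:
        return True, int(np.argmax(probs))
    return False, None

# A confident softmax output passes the supervisor (entropy ~0.15)...
ok, label = supervise(np.array([0.97, 0.02, 0.01]))
# ...while a near-uniform one (entropy ~1.10) is rejected.
rejected, _ = supervise(np.array([0.34, 0.33, 0.33]))
```

The threshold is the key design choice: too low and the system abstains constantly, too high and misbehaviour slips through, which is exactly the trade-off the empirical comparisons above evaluate.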

CheapET-3: Cost-Efficient Use of Remote DNN Models

no code implementations • 24 Aug 2022 • Michael Weiss

On complex problems, state-of-the-art prediction accuracy of Deep Neural Networks (DNNs) can be achieved using very large-scale models consisting of billions of parameters.

Generating and Detecting True Ambiguity: A Forgotten Danger in DNN Supervision Testing

no code implementations • 21 Jul 2022 • Michael Weiss, André García Gómez, Paolo Tonella

In this paper, we propose a novel way to generate ambiguous inputs to test DNN supervisors and use it to empirically compare several existing supervisor techniques.

Image Classification

Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study)

3 code implementations • 2 May 2022 • Michael Weiss, Paolo Tonella

Test Input Prioritizers (TIPs) for Deep Neural Networks (DNNs) are an important technique for handling the typically very large test datasets efficiently, saving computation and labeling costs.

Active Learning • Uncertainty Quantification

A Review and Refinement of Surprise Adequacy

2 code implementations • 10 Mar 2021 • Michael Weiss, Rwiddhi Chakraborty, Paolo Tonella

As an adequacy criterion, Surprise Adequacy has been used to assess the strength of deep learning (DL) test suites.

Out-of-Distribution Detection

Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring

2 code implementations • 1 Feb 2021 • Michael Weiss, Paolo Tonella

Modern software systems rely on Deep Neural Networks (DNN) when processing complex, unstructured inputs, such as images, videos, natural language texts or audio signals.

Uncertainty-Wizard: Fast and User-Friendly Neural Network Uncertainty Quantification

1 code implementation • 29 Dec 2020 • Michael Weiss, Paolo Tonella

Uncertainty and confidence have been shown to be useful metrics in a wide variety of techniques proposed for deep learning testing, including test data selection and system supervision. We present uncertainty-wizard, a tool that quantifies such uncertainty and confidence in artificial neural networks.

Uncertainty Quantification
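The sampling-based family of uncertainty quantification techniques that tools like uncertainty-wizard wrap can be sketched in plain numpy: run T stochastic forward passes (as in Monte-Carlo dropout), average the softmax outputs, and report a dispersion-based confidence alongside the prediction. The stub model below is a stand-in of my own, not the tool's actual API.

```python
import numpy as np

def mc_predict(model, x, t: int = 20, rng=None):
    """Average t stochastic forward passes of `model` on input `x`.
    Returns (label, mean_probs, variation_ratio), where variation_ratio is
    1 minus the fraction of passes agreeing with the majority class
    (0.0 means every sampled pass predicted the same class)."""
    rng = rng or np.random.default_rng(0)
    samples = np.stack([model(x, rng) for _ in range(t)])  # shape (t, classes)
    mean_probs = samples.mean(axis=0)
    votes = samples.argmax(axis=1)
    majority = np.bincount(votes).argmax()
    variation_ratio = 1.0 - float(np.mean(votes == majority))
    return int(mean_probs.argmax()), mean_probs, variation_ratio

# Hypothetical stand-in for a stochastic network: fixed logits perturbed
# by dropout-like noise, then softmax-normalized.
def noisy_model(x, rng):
    logits = np.array([2.0, 0.5, 0.1]) + rng.normal(0.0, 0.2, size=3)
    e = np.exp(logits - logits.max())
    return e / e.sum()

label, mean_probs, vr = mc_predict(noisy_model, None, t=50)
```

A low variation ratio signals that the stochastic samples agree, which is exactly the kind of confidence score a supervisor can threshold on.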

Misbehaviour Prediction for Autonomous Driving Systems

1 code implementation • 10 Oct 2019 • Andrea Stocco, Michael Weiss, Marco Calzana, Paolo Tonella

Deep Neural Networks (DNNs) are the core component of modern autonomous driving systems.

Signal Processing
