Search Results for author: Paolo Tonella

Found 21 papers, 16 papers with code

Reinforcement Learning for Online Testing of Autonomous Driving Systems: a Replication and Extension Study

no code implementations20 Mar 2024 Luca Giamattei, Matteo Biagiola, Roberto Pietrantuono, Stefano Russo, Paolo Tonella

Our extension aims at eliminating some of the possible reasons for the poor performance of RL observed in our replication: (1) the presence of reward components providing contrasting or useless feedback to the RL agent; (2) the usage of an RL algorithm (Q-learning) which requires discretization of an intrinsically continuous state space.

Autonomous Driving Q-Learning +1
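The abstract above attributes part of RL's poor performance to Q-learning's need to discretize an intrinsically continuous state space. A minimal sketch of why that matters (the state variables, ranges, and bin counts below are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

def discretize(state, bins):
    """Map a continuous state vector to a tuple of bin indices.

    Information is inevitably lost: all continuous states falling in
    the same cell share a single Q-table entry.
    """
    return tuple(int(np.digitize(s, b)) for s, b in zip(state, bins))

# Two hypothetical continuous state variables, e.g. lateral offset and
# heading error (illustrative names only).
bins = [np.linspace(-2.0, 2.0, 9), np.linspace(-np.pi, np.pi, 9)]

# Nearby continuous states collapse onto the same discrete cell, so the
# agent cannot distinguish them:
a = discretize([0.10, 0.05], bins)
b = discretize([0.12, 0.07], bins)
```

A tabular Q-learner then indexes its Q-table with these tuples, whereas algorithms with function approximation can consume the continuous state directly.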

Boundary State Generation for Testing and Improvement of Autonomous Driving Systems

no code implementations20 Jul 2023 Matteo Biagiola, Paolo Tonella

State-of-the-art ADS testing approaches modify the controllable attributes of a simulated driving environment until the ADS misbehaves.

Autonomous Driving

Testing of Deep Reinforcement Learning Agents with Surrogate Models

1 code implementation22 May 2023 Matteo Biagiola, Paolo Tonella

The failure prediction acts as a fitness function, guiding the generation towards failure environment configurations, while saving computation time by deferring the execution of the DRL agent in the environment to those configurations that are more likely to expose failures.

Autonomous Vehicles Reinforcement Learning
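The surrogate idea described above, using a cheap failure predictor as a fitness function and executing the expensive DRL agent only on promising configurations, can be sketched generically. The predictor, environment, and failure oracle below are toy stand-ins, not the paper's implementation:

```python
import random

def surrogate_guided_search(candidates, predict_failure_prob, run_agent, budget):
    """Rank candidate environment configurations by a cheap surrogate's
    predicted failure probability, then spend the costly execution
    budget only on the most promising ones."""
    ranked = sorted(candidates, key=predict_failure_prob, reverse=True)
    failures = []
    for config in ranked[:budget]:
        if run_agent(config):  # expensive: actually runs the DRL agent
            failures.append(config)
    return failures

# Toy stand-ins: configurations are road curvatures; the agent "fails"
# above 0.8, and the surrogate is a noisy estimate of the curvature.
random.seed(0)
configs = [i / 10 for i in range(11)]
surrogate = lambda c: c + random.uniform(-0.05, 0.05)
oracle = lambda c: c > 0.8

found = surrogate_guided_search(configs, surrogate, oracle, budget=3)
```

With a budget of 3 executions instead of 11, the search still exposes both failing configurations because the surrogate concentrates the budget on high-risk candidates.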

Two is Better Than One: Digital Siblings to Improve Autonomous Driving Testing

1 code implementation14 May 2023 Matteo Biagiola, Andrea Stocco, Vincenzo Riccio, Paolo Tonella

Our empirical evaluation shows that the ensemble failure predictor by the digital siblings is superior to each individual simulator at predicting the failures of the digital twin.

Autonomous Driving
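The ensemble failure predictor built from two "digital sibling" simulators can be sketched as an agreement rule over the siblings' individual predictions. The conservative both-must-agree rule and the scores below are illustrative assumptions, not necessarily the paper's exact combination:

```python
def ensemble_predict(pred_sim_a, pred_sim_b, threshold=0.5):
    """Flag a test case as failure-inducing only when both sibling
    simulators predict a failure (illustrative combination rule),
    which filters out failures specific to one simulator."""
    return pred_sim_a >= threshold and pred_sim_b >= threshold

# Toy failure scores from two hypothetical simulators on three roads:
cases = [(0.9, 0.8), (0.9, 0.2), (0.1, 0.1)]
flags = [ensemble_predict(a, b) for a, b in cases]
```

Only the first road, where both siblings agree, is predicted to fail on the digital twin.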

Adopting Two Supervisors for Efficient Use of Large-Scale Remote Deep Neural Networks

1 code implementation5 Apr 2023 Michael Weiss, Paolo Tonella

Systems relying on large-scale DNNs thus have to call the corresponding model over the network, leading to substantial costs for hosting and running the large-scale remote model, costs which are often charged on a per-use basis.

Image Classification Question Answering +2
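Since the remote large-scale model is charged per use, a natural cost-saving pattern is to answer locally when a small model is confident and pay for the remote call only otherwise. This is a sketch of that pattern under assumed model interfaces, not the paper's exact supervisor design:

```python
def classify(x, local_model, remote_model, confidence_threshold=0.9):
    """Use the cheap local model when it is confident enough; otherwise
    fall back to the costly large-scale remote model (hypothetical
    interfaces: local returns (label, confidence), remote a label)."""
    label, confidence = local_model(x)
    if confidence >= confidence_threshold:
        return label, "local"
    return remote_model(x), "remote"

# Toy stand-ins for the two models:
local = lambda x: ("cat", 0.95) if x == "easy" else ("cat", 0.4)
remote = lambda x: "dog"

easy = classify("easy", local, remote)
hard = classify("hard", local, remote)
```

The confidence threshold trades accuracy against the fraction of inputs that incur remote-call costs.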

When and Why Test Generators for Deep Learning Produce Invalid Inputs: an Empirical Study

1 code implementation21 Dec 2022 Vincenzo Riccio, Paolo Tonella

In this paper, we investigate to what extent TIGs can generate valid inputs, according to both automated and human validators.


Uncertainty Quantification for Deep Neural Networks: An Empirical Comparison and Usage Guidelines

no code implementations14 Dec 2022 Michael Weiss, Paolo Tonella

After overviewing the main approaches to uncertainty estimation and discussing their pros and cons, we motivate the need for a specific empirical assessment method that can deal with the experimental setting in which supervisors are used, where the accuracy of the DNN matters only as long as the supervisor lets the DLS continue to operate.

Uncertainty Quantification
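The supervisor setting described above, where the DLS is allowed to continue operating only while the DNN's uncertainty stays acceptable, can be sketched as a simple thresholded gate. The threshold and toy predictor are assumptions for illustration:

```python
def supervise(inputs, predict_with_uncertainty, max_uncertainty=0.3):
    """Let the deep learning system act only on inputs whose predictive
    uncertainty is below a threshold; divert the rest to a fallback,
    e.g. a human or a safe default (illustrative supervisor)."""
    accepted, rejected = [], []
    for x in inputs:
        label, uncertainty = predict_with_uncertainty(x)
        (accepted if uncertainty <= max_uncertainty else rejected).append(x)
    return accepted, rejected

# Toy predictor: even inputs are "easy" (low uncertainty), odd ones hard.
toy = lambda x: ("go", 0.1) if x % 2 == 0 else ("go", 0.9)
accepted, rejected = supervise([0, 1, 2, 3], toy)
```

This is exactly why the paper argues DNN accuracy matters only on the inputs the supervisor lets through: misclassifications on rejected inputs never reach the system.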

Generating and Detecting True Ambiguity: A Forgotten Danger in DNN Supervision Testing

no code implementations21 Jul 2022 Michael Weiss, André García Gómez, Paolo Tonella

In this paper, we propose a novel way to generate ambiguous inputs to test DNN supervisors and use it to empirically compare several existing supervisor techniques.

Image Classification

Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study)

3 code implementations2 May 2022 Michael Weiss, Paolo Tonella

Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important technique to handle the typically very large test datasets efficiently, saving computation and labeling costs.

Active Learning Uncertainty Quantification
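One of the simple techniques studied in this line of work is DeepGini, which prioritizes test inputs by the Gini impurity of the model's softmax output. A minimal sketch with a toy model (the inputs and softmax vectors are made up for illustration):

```python
def deepgini(softmax):
    """Gini impurity of a softmax output: 1 - sum(p_i^2).
    Higher values mean the network is less certain."""
    return 1.0 - sum(p * p for p in softmax)

def prioritize(test_inputs, model):
    """Order test inputs by descending DeepGini score, so the most
    uncertain (and most labeling-worthy) inputs come first."""
    return sorted(test_inputs, key=lambda x: deepgini(model(x)), reverse=True)

# Toy model mapping inputs to softmax vectors (illustrative values):
outputs = {"a": [0.98, 0.01, 0.01], "b": [0.4, 0.3, 0.3], "c": [0.7, 0.2, 0.1]}
order = prioritize(["a", "b", "c"], outputs.get)
```

Labeling in this order front-loads the inputs most likely to be misclassified, which is what makes such prioritizers useful for both testing and active learning.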

Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems

1 code implementation21 Dec 2021 Andrea Stocco, Brian Pulfer, Paolo Tonella

In this paper, we shed light on the problem of generalizing testing results obtained in a driving simulator to a physical platform and provide a characterization and quantification of the sim2real gap affecting SDC testing.

Autonomous Driving Neural Rendering +2

DeepMetis: Augmenting a Deep Learning Test Set to Increase its Mutation Score

1 code implementation15 Sep 2021 Vincenzo Riccio, Nargiz Humbatova, Gunel Jahangirova, Paolo Tonella

The adequacy of the test data used to test such systems can be assessed by their ability to expose artificially injected faults (mutations) that simulate real DL faults.
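The mutation score mentioned above is the fraction of injected faults that the test set exposes. A toy sketch of its computation (the "model", mutants, and kill criterion are simplified stand-ins for the DL mutation operators the paper uses):

```python
def mutation_score(test_set, mutants, kills):
    """Fraction of injected faults (mutants) exposed by the test set:
    a mutant is 'killed' if at least one test makes it misbehave."""
    killed = sum(1 for m in mutants if any(kills(t, m) for t in test_set))
    return killed / len(mutants)

# Toy setting: the "model" is a threshold classifier; mutants shift its
# threshold, and a test kills a mutant when the mutant's output differs
# from the original model's (illustrative kill criterion).
original = lambda x: x > 0.5
mutants = [lambda x, d=d: x > 0.5 + d for d in (0.05, 0.2)]
kills = lambda t, m: m(t) != original(t)

score = mutation_score([0.6], mutants, kills)
```

Here the single test input 0.6 kills only the larger-shift mutant, so the test set is inadequate; DeepMetis would augment it until both mutants are exposed.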

DeepHyperion: Exploring the Feature Space of Deep Learning-Based Systems through Illumination Search

1 code implementation5 Jul 2021 Tahereh Zohdinasab, Vincenzo Riccio, Alessio Gambi, Paolo Tonella

Deep Learning (DL) has been successfully applied to a wide range of application domains, including safety-critical ones.

A Review and Refinement of Surprise Adequacy

2 code implementations10 Mar 2021 Michael Weiss, Rwiddhi Chakraborty, Paolo Tonella

As an adequacy criterion, it has been used to assess the strength of DL test suites.

Out-of-Distribution Detection
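Surprise Adequacy, which this paper reviews and refines, scores how "surprising" an input's activation trace is relative to the training data. A heavily simplified distance-based sketch (real variants use full activation traces and class-conditional distances, which are omitted here):

```python
import math

def surprise(activation, train_activations):
    """Distance-based surprise: how far a new input's activation trace
    lies from the closest training-set trace. Simplified illustration
    of the distance-based flavor of Surprise Adequacy."""
    return min(math.dist(activation, t) for t in train_activations)

# Toy 2-D activation traces for two training inputs:
train = [(0.0, 0.0), (1.0, 1.0)]
low = surprise((0.1, 0.0), train)   # near the training data
high = surprise((5.0, 5.0), train)  # far away: surprising
```

High surprise flags inputs unlike anything seen in training, which is why the metric doubles as an out-of-distribution detector and a test-suite adequacy criterion.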

GAssert: A Fully Automated Tool to Improve Assertion Oracles

1 code implementation4 Mar 2021 Valerio Terragni, Gunel Jahangirova, Paolo Tonella, Mauro Pezzè

This demo presents the implementation and usage details of GASSERT, the first tool to automatically improve assertion oracles.

Software Engineering

Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring

2 code implementations1 Feb 2021 Michael Weiss, Paolo Tonella

Modern software systems rely on Deep Neural Networks (DNN) when processing complex, unstructured inputs, such as images, videos, natural language texts or audio signals.

Deep Reinforcement Learning for Black-Box Testing of Android Apps

1 code implementation7 Jan 2021 Andrea Romdhana, Alessio Merlo, Mariano Ceccato, Paolo Tonella

We have developed ARES, a Deep RL approach for black-box testing of Android apps.

Software Engineering

Uncertainty-Wizard: Fast and User-Friendly Neural Network Uncertainty Quantification

1 code implementation29 Dec 2020 Michael Weiss, Paolo Tonella

Uncertainty and confidence have been shown to be useful metrics in a wide variety of techniques proposed for deep learning testing, including test data selection and system supervision. We present uncertainty-wizard, a tool that makes it possible to quantify such uncertainty and confidence in artificial neural networks.

Uncertainty Quantification
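The kind of uncertainty such a tool quantifies is often estimated by Monte-Carlo sampling: running a stochastic model (e.g. with dropout kept active at inference time) repeatedly and measuring the spread of its outputs. This is a generic sketch of that idea with a toy stochastic predictor, not the uncertainty-wizard API itself:

```python
import random
import statistics

def mc_uncertainty(stochastic_predict, x, samples=100):
    """Monte-Carlo estimate of predictive mean and uncertainty: sample a
    stochastic model repeatedly and report the mean and standard
    deviation of its outputs (generic sketch)."""
    preds = [stochastic_predict(x) for _ in range(samples)]
    return statistics.mean(preds), statistics.stdev(preds)

# Toy stochastic model: the true output plus Gaussian noise.
random.seed(1)
noisy = lambda x: x + random.gauss(0, 0.1)
mean, std = mc_uncertainty(noisy, 2.0)
```

The standard deviation serves as the confidence signal that supervisors and test-selection techniques consume.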

Model-based Exploration of the Frontier of Behaviours for Deep Learning System Testing

1 code implementation6 Jul 2020 Vincenzo Riccio, Paolo Tonella

If the frontier of misbehaviours is outside the validity domain of the system, the quality check is passed.

Autonomous Driving

Taxonomy of Real Faults in Deep Learning Systems

2 code implementations24 Oct 2019 Nargiz Humbatova, Gunel Jahangirova, Gabriele Bavota, Vincenzo Riccio, Andrea Stocco, Paolo Tonella

The growing application of deep neural networks in safety-critical domains makes the analysis of faults that occur in such systems of enormous importance.

Misbehaviour Prediction for Autonomous Driving Systems

1 code implementation10 Oct 2019 Andrea Stocco, Michael Weiss, Marco Calzana, Paolo Tonella

Deep Neural Networks (DNNs) are the core component of modern autonomous driving systems.

Signal Processing

Assessment of Source Code Obfuscation Techniques

no code implementations7 Apr 2017 Alessio Viticchié, Leonardo Regano, Marco Torchiano, Cataldo Basile, Mariano Ceccato, Paolo Tonella, Roberto Tiella

Obfuscation techniques are a general category of software protections widely adopted to prevent malicious tampering of the code by making applications more difficult to understand and thus harder to modify.

Software Engineering Cryptography and Security
