Search Results for author: Alessandro Farinelli

Found 24 papers, 6 papers with code

Analyzing Adversarial Inputs in Deep Reinforcement Learning

no code implementations7 Feb 2024 Davide Corsi, Guy Amir, Guy Katz, Alessandro Farinelli

In recent years, Deep Reinforcement Learning (DRL) has become a popular paradigm in machine learning due to its successful applications to real-world and complex systems.

Reinforcement Learning (RL)

Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing

no code implementations10 Dec 2023 Luca Marzari, Gabriele Roncolato, Alessandro Farinelli

Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios, ranging from pattern recognition to complex robotic problems.

Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees

no code implementations18 Aug 2023 Luca Marzari, Davide Corsi, Enrico Marchesini, Alessandro Farinelli, Ferdinando Cicalese

Identifying safe regions is a key step toward guaranteeing trust in systems based on Deep Neural Networks (DNNs).

Learning Logic Specifications for Soft Policy Guidance in POMCP

1 code implementation16 Mar 2023 Giulio Mazzi, Daniele Meli, Alberto Castellini, Alessandro Farinelli

In this paper, we use inductive logic programming to learn logic specifications from traces of POMCP executions, i.e., sets of belief-action pairs generated by the planner.

Inductive logic programming
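
Below is a minimal, hedged sketch of the kind of data this works on: POMCP traces as belief-action pairs and a toy scoring step for one candidate logic rule. The trace format, rule template, and threshold are illustrative assumptions, not the paper's actual ILP pipeline (which learns the specifications rather than checking a hand-written one).

    # Hypothetical illustration: score a candidate logic specification against
    # POMCP traces, i.e. sets of (belief, action) pairs. The rule, trace format
    # and threshold are assumptions for this sketch, not the paper's method.

    # Each belief is a distribution over hidden states; each action is a label.
    traces = [
        ({"obstacle_near": 0.9, "obstacle_far": 0.1}, "slow_down"),
        ({"obstacle_near": 0.2, "obstacle_far": 0.8}, "go_fast"),
        ({"obstacle_near": 0.7, "obstacle_far": 0.3}, "slow_down"),
        ({"obstacle_near": 0.6, "obstacle_far": 0.4}, "go_fast"),
    ]

    def rule_holds(belief, action, threshold=0.5):
        """Candidate specification: if P(obstacle_near) > threshold, slow down."""
        if belief["obstacle_near"] > threshold:
            return action == "slow_down"
        return True  # the rule only constrains high-risk beliefs

    coverage = sum(rule_holds(b, a) for b, a in traces) / len(traces)
    print(f"candidate rule satisfied on {coverage:.0%} of belief-action pairs")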

Safe Deep Reinforcement Learning by Verifying Task-Level Properties

no code implementations20 Feb 2023 Enrico Marchesini, Luca Marzari, Alessandro Farinelli, Christopher Amato

In this paper, we investigate an alternative approach that uses domain knowledge to quantify the risk in the proximity of such states by defining a violation metric.

Reinforcement Learning (RL)
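
A hedged sketch of one way a proximity-based violation metric could be computed: sample perturbed states in an epsilon-ball around a visited state and count how often a toy policy leads into a known unsafe region. The policy, unsafe set, and epsilon below are placeholder assumptions, not the paper's definitions.

    import numpy as np

    # Illustrative sketch only: the policy, unsafe region and epsilon-ball
    # sampling are assumptions, not the violation metric from the paper.

    rng = np.random.default_rng(0)

    def toy_policy(state):
        # Hypothetical policy: move toward the origin.
        return -0.1 * state

    def unsafe(state):
        # Hypothetical unsafe region: a box around (1, 1).
        return np.all(np.abs(state - np.array([1.0, 1.0])) < 0.2)

    def violation_metric(state, eps=0.3, n_samples=1000):
        """Fraction of perturbed states whose next state lands in the unsafe region."""
        noise = rng.uniform(-eps, eps, size=(n_samples, state.shape[0]))
        perturbed = state + noise
        next_states = perturbed + np.array([toy_policy(s) for s in perturbed])
        return np.mean([unsafe(s) for s in next_states])

    print(violation_metric(np.array([1.1, 1.1])))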

Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation

no code implementations13 Feb 2023 Luca Marzari, Enrico Marchesini, Alessandro Farinelli

Our evaluation compares the benefits of computing the number of violations using standard hard-coded properties versus properties generated with CROP.

The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks

no code implementations17 Jan 2023 Luca Marzari, Davide Corsi, Ferdinando Cicalese, Alessandro Farinelli

Due to the #P-completeness of the problem, we also propose a randomized, approximate method that provides a provable probabilistic bound of the correct count while significantly reducing computational requirements.

Autonomous Driving, Model Selection
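
A minimal sketch of the general idea behind randomized approximate counting: estimate the fraction of unsafe inputs by uniform sampling and attach a Hoeffding-style confidence interval. The tiny random network, input domain, and safety property are placeholder assumptions; the paper's algorithm and probabilistic bound are more refined than this.

    import numpy as np

    # Sketch of randomized approximate counting of unsafe inputs with a
    # Hoeffding-style bound. The tiny network and safety property below are
    # illustrative assumptions, not the paper's benchmarks.

    rng = np.random.default_rng(0)

    W = rng.normal(size=(2, 4))
    b = rng.normal(size=4)
    v = rng.normal(size=4)

    def net(x):
        return np.maximum(x @ W + b, 0.0) @ v  # small ReLU net, scalar output

    def is_unsafe(x):
        return net(x) > 1.0  # property: output should stay <= 1 on the domain

    n = 20_000
    xs = rng.uniform(-1.0, 1.0, size=(n, 2))       # uniform samples from [-1, 1]^2
    p_hat = np.mean([is_unsafe(x) for x in xs])    # estimated violation rate

    delta = 1e-3                                    # failure probability
    eps = np.sqrt(np.log(2 / delta) / (2 * n))      # Hoeffding half-width
    print(f"violation rate in [{max(0, p_hat - eps):.4f}, {min(1, p_hat + eps):.4f}] "
          f"with prob. >= {1 - delta}")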

Verifying Learning-Based Robotic Navigation Systems

no code implementations26 May 2022 Guy Amir, Davide Corsi, Raz Yerushalmi, Luca Marzari, David Harel, Alessandro Farinelli, Guy Katz

Our work is the first to establish the usefulness of DNN verification in identifying and filtering out suboptimal DRL policies in real-world robots, and we believe that the methods presented here are applicable to a wide range of systems that incorporate deep-learning-based agents.

Model Selection, Navigate

An attention model for the formation of collectives in real-world domains

no code implementations30 Apr 2022 Adrià Fenoy, Filippo Bistaffa, Alessandro Farinelli

We consider the problem of forming collectives of agents for real-world applications aligned with Sustainable Development Goals (e.g., shared mobility, cooperative learning).

Curriculum Learning for Safe Mapless Navigation

1 code implementation23 Dec 2021 Luca Marzari, Davide Corsi, Enrico Marchesini, Alessandro Farinelli

To this end, we present a CL approach that leverages Transfer of Learning (ToL) and fine-tuning in a Unity-based simulation with the Robotnik Kairos as a robotic agent.

Unity
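
A schematic sketch of the curriculum idea described above: train on easier stages first and transfer the learned parameters to harder ones before fine-tuning. The toy quadratic objective and hyperparameters are placeholders and bear no relation to the Unity/Robotnik Kairos setup used in the paper.

    import numpy as np

    # Schematic curriculum-learning loop: train on easier tasks first and
    # transfer the learned parameters to harder ones (fine-tuning). The toy
    # objective and hyperparameters are placeholder assumptions.

    rng = np.random.default_rng(0)

    def train_stage(theta, difficulty, steps=200, lr=0.1):
        """Gradient descent on a toy objective whose optimum depends on difficulty."""
        target = np.full_like(theta, difficulty)     # harder stage -> different optimum
        for _ in range(steps):
            grad = 2 * (theta - target)              # gradient of ||theta - target||^2
            theta = theta - lr * grad
        return theta

    theta = rng.normal(size=4)                       # initial policy parameters
    for difficulty in [0.5, 1.0, 2.0]:               # curriculum: easy -> hard
        theta = train_stage(theta, difficulty)       # transfer weights, then fine-tune
        print(f"difficulty {difficulty}: params ~ {theta.round(2)}")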

Centralizing State-Values in Dueling Networks for Multi-Robot Reinforcement Learning Mapless Navigation

no code implementations16 Dec 2021 Enrico Marchesini, Alessandro Farinelli

We study the problem of multi-robot mapless navigation in the popular Centralized Training and Decentralized Execution (CTDE) paradigm.

Reinforcement Learning (RL)
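
A hedged PyTorch sketch of one way to centralize the state-value stream of a dueling architecture across robots while keeping per-robot advantage streams. Layer sizes and the way observations are combined are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    # Sketch of a dueling head where the state-value stream sees the joint
    # (centralized) observation while each robot keeps its own advantage
    # stream. Layer sizes and aggregation are illustrative assumptions.

    class CentralizedDueling(nn.Module):
        def __init__(self, obs_dim, n_actions, n_robots, hidden=64):
            super().__init__()
            self.value = nn.Sequential(                     # centralized V(s)
                nn.Linear(obs_dim * n_robots, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )
            self.advantages = nn.ModuleList([               # per-robot A_i(o_i, a)
                nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
                for _ in range(n_robots)
            ])

        def forward(self, joint_obs):                       # joint_obs: (batch, n_robots, obs_dim)
            v = self.value(joint_obs.flatten(1))            # (batch, 1)
            qs = []
            for i, adv_net in enumerate(self.advantages):
                a = adv_net(joint_obs[:, i])                # (batch, n_actions)
                qs.append(v + a - a.mean(dim=1, keepdim=True))  # dueling aggregation
            return torch.stack(qs, dim=1)                   # (batch, n_robots, n_actions)

    q = CentralizedDueling(obs_dim=8, n_actions=5, n_robots=3)(torch.randn(2, 3, 8))
    print(q.shape)  # torch.Size([2, 3, 5])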

Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation

no code implementations16 Dec 2021 Enrico Marchesini, Davide Corsi, Alessandro Farinelli

Aquatic navigation is an extremely challenging task due to the non-stationary environment and the uncertainties of the robotic platform; it is therefore crucial to consider the safety aspect of the problem by analyzing the behavior of the trained network to avoid dangerous situations (e.g., collisions).

Benchmarking, Reinforcement Learning (RL), +2

Rule-based Shielding for Partially Observable Monte-Carlo Planning

1 code implementation28 Apr 2021 Giulio Mazzi, Alberto Castellini, Alessandro Farinelli

Results show that the shielded POMCP outperforms the standard POMCP in a case study in which a misconfigured POMCP parameter occasionally leads it to select wrong actions.

Robot Navigation
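
A minimal sketch of the shielding idea: wrap the planner's action selection and override actions that violate a rule. The rule, belief representation, and fallback action below are illustrative assumptions; in the paper the rules are learned from execution traces.

    # Minimal sketch of rule-based shielding around a planner's action choice.
    # The rule, belief representation and fallback action are assumptions for
    # illustration; the paper learns its rules from execution traces.

    def planner_action(belief):
        # Stand-in for POMCP's (possibly misconfigured) action selection.
        return "go_fast"

    def shield(belief, action, threshold=0.6):
        """Override the planner when the belief says the risk is too high."""
        if belief["obstacle_near"] > threshold and action == "go_fast":
            return "slow_down"                      # safe fallback required by the rule
        return action

    belief = {"obstacle_near": 0.8, "obstacle_far": 0.2}
    print(shield(belief, planner_action(belief)))   # -> slow_down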

Genetic Soft Updates for Policy Evolution in Deep Reinforcement Learning

no code implementations ICLR 2021 Enrico Marchesini, Davide Corsi, Alessandro Farinelli

The combination of Evolutionary Strategies (ES) and Deep Reinforcement Learning (DRL) has been recently proposed to merge the benefits of both solutions.

Continuous Control, Reinforcement Learning (RL), +1
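
A hedged sketch of a generic "soft update" that blends weights from an evolved individual into the DRL policy instead of hard-copying them. The flattened parameter vectors and tau are assumptions, not the exact update rule proposed in the paper.

    import numpy as np

    # Sketch of a soft (Polyak-style) update blending weights evolved by an ES
    # population into a DRL policy, instead of a hard copy. tau and the flat
    # parameter vectors are illustrative assumptions.

    def soft_update(policy_params, evolved_params, tau=0.05):
        """policy <- (1 - tau) * policy + tau * evolved."""
        return (1.0 - tau) * policy_params + tau * evolved_params

    rng = np.random.default_rng(0)
    policy = rng.normal(size=6)           # current DRL policy weights (flattened)
    best_individual = rng.normal(size=6)  # best member of the ES population

    policy = soft_update(policy, best_individual)
    print(policy.round(3))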

Identification of Unexpected Decisions in Partially Observable Monte-Carlo Planning: a Rule-Based Approach

1 code implementation23 Dec 2020 Giulio Mazzi, Alberto Castellini, Alessandro Farinelli

We propose an iterative process of trace analysis consisting of three main steps: (i) the definition of a question by means of a parametric logical formula describing (probabilistic) relationships between beliefs and actions; (ii) the generation of an answer by computing the parameters of the logical formula that maximize the number of satisfied clauses (solving a MAX-SMT problem); (iii) the analysis of the generated logical formula and the related decision boundaries for identifying unexpected decisions made by POMCP with respect to the original question.

Anomaly Detection, Robot Navigation
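
A compact sketch of step (ii) using Z3's Optimize interface (MaxSMT via soft constraints): pick the parameter x of a rule template such as "if P(danger) >= x then the action is stop" so that as many belief-action pairs as possible satisfy it. The traces and rule template are toy assumptions, not the formulas analyzed in the paper.

    from z3 import Optimize, Real, RealVal, BoolVal, Or, sat

    # Sketch of the MAX-SMT step: choose the parameter x of a rule template
    # "if P(danger) >= x then action == stop" so that as many belief-action
    # pairs as possible satisfy it. Traces and template are toy assumptions.

    traces = [(0.9, "stop"), (0.8, "stop"), (0.7, "go"), (0.3, "go"), (0.2, "go")]

    opt = Optimize()
    x = Real("x")
    opt.add(x >= 0, x <= 1)

    for p_danger, action in traces:
        clause = Or(RealVal(p_danger) < x, BoolVal(action == "stop"))  # implication as disjunction
        opt.add_soft(clause)                                           # maximize #satisfied clauses

    if opt.check() == sat:
        print("learned threshold x =", opt.model()[x])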

Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification

no code implementations19 Oct 2020 Davide Corsi, Enrico Marchesini, Alessandro Farinelli

In this paper, we present a semi-formal verification approach for decision-making tasks, based on interval analysis, that addresses the computational demands of previous verification frameworks, and we design metrics to measure the safety of the models.

Decision Making, Reinforcement Learning (RL), +1
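
A compact sketch of the interval-analysis idea: propagate elementwise lower/upper bounds through a small ReLU network to bound its output over a box of inputs. The random weights and input box are placeholders, not the models or metrics from the paper.

    import numpy as np

    # Sketch of naive interval bound propagation through a ReLU network:
    # given elementwise input bounds [lo, hi], bound the network output.
    # The random weights and input box are placeholder assumptions.

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

    def interval_affine(lo, hi, W, b):
        """Bounds of x @ W + b for x in [lo, hi], splitting W by sign."""
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = lo @ W_pos + hi @ W_neg + b
        new_hi = hi @ W_pos + lo @ W_neg + b
        return new_lo, new_hi

    lo, hi = np.array([-0.1, -0.1, -0.1]), np.array([0.1, 0.1, 0.1])   # input box
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)                      # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    print(f"output guaranteed to lie in [{lo[0]:.3f}, {hi[0]:.3f}]")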

Algorithms for Graph-Constrained Coalition Formation in the Real World

1 code implementation13 Dec 2016 Filippo Bistaffa, Alessandro Farinelli, Jesús Cerquides, Juan A. Rodríguez-Aguilar, Sarvapali D. Ramchurn

In this paper, we focus on a special case of coalition formation known as Graph-Constrained Coalition Formation (GCCF) whereby a network connecting the agents constrains the formation of coalitions.
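
A small sketch of the graph constraint itself: a coalition is feasible only if its members induce a connected subgraph of the agent network. The example network and coalitions are toy assumptions; the algorithms in the paper go well beyond this feasibility check.

    import networkx as nx

    # Sketch of the core GCCF constraint: a coalition is feasible only if its
    # members induce a connected subgraph of the agent network. The network
    # and coalitions below are toy assumptions for illustration.

    G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5)])   # agents connected in a line

    def feasible(coalition):
        sub = G.subgraph(coalition)
        return len(coalition) > 0 and nx.is_connected(sub)

    print(feasible({1, 2, 3}))   # True: induces a connected path
    print(feasible({1, 2, 4}))   # False: agent 4 is not linked to {1, 2}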

Worst-case bounds on the quality of max-product fixed-points

no code implementations NeurIPS 2010 Meritxell Vinyals, Jesús Cerquides, Alessandro Farinelli, Juan A. Rodríguez-Aguilar

We study worst-case bounds on the quality of any fixed point assignment of the max-product algorithm for Markov Random Fields (MRF).
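
For reference, a tiny max-product computation on a two-variable pairwise MRF, the kind of fixed point the bounds apply to; the potentials are arbitrary placeholders, and on this toy tree the result coincides with the exact MAP assignment.

    import numpy as np

    # Tiny max-product example on a two-variable pairwise MRF; the potentials
    # are arbitrary placeholders. On a tree the fixed point recovers the MAP
    # assignment; the paper bounds fixed-point quality in general MRFs.

    phi1 = np.array([0.6, 0.4])                 # unary potentials of x1
    phi2 = np.array([0.3, 0.7])                 # unary potentials of x2
    psi = np.array([[1.0, 0.2],                 # pairwise potential psi(x1, x2)
                    [0.2, 1.0]])

    m12 = np.max(phi1[:, None] * psi, axis=0)   # message x1 -> x2
    m21 = np.max(phi2[None, :] * psi, axis=1)   # message x2 -> x1

    b1 = phi1 * m21                             # max-marginals
    b2 = phi2 * m12
    assignment = (int(np.argmax(b1)), int(np.argmax(b2)))

    # Brute-force check of the MAP assignment on this toy model.
    joint = phi1[:, None] * phi2[None, :] * psi
    print(assignment, np.unravel_index(np.argmax(joint), joint.shape))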
