no code implementations • 15 Jul 2024 • Ondrej Bajgar, Alessandro Abate, Konstantinos Gatsis, Michael A. Osborne

The goal of Bayesian inverse reinforcement learning (IRL) is to recover a posterior distribution over reward functions from a set of demonstrations provided by an expert who optimizes a reward unknown to the learner.

no code implementations • 22 Jun 2024 • Lukas Fluri, Leon Lang, Alessandro Abate, Patrick Forré, David Krueger, Joar Skalse

We say that such a reward model has an error-regret mismatch.

1 code implementation • 14 Jun 2024 • Luckeciano C. Melo, Panagiotis Tigas, Alessandro Abate, Yarin Gal

We address this by proposing the Bayesian Active Learner for Preference Modeling (BAL-PM), a novel stochastic acquisition policy that not only targets points of high epistemic uncertainty according to the preference model but also seeks to maximize the entropy of the acquired prompt distribution in the feature space spanned by the employed LLM.

1 code implementation • 24 May 2024 • Alessandro Abate, Mirco Giacobbe, Yannik Schnitzer

We introduce a data-driven approach to computing finite bisimulations for state transition systems with very large, possibly infinite, state spaces.

no code implementations • 14 May 2024 • Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers

Abstractions of dynamical systems enable their verification and the design of feedback controllers using simpler, usually discrete, models.

no code implementations • 10 May 2024 • David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum

Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts.

no code implementations • 29 Apr 2024 • Alessandro Abate, Sergiy Bogomolov, Alec Edwards, Kostiantyn Potomkin, Sadegh Soudjani, Paolo Zuliani

We present a novel technique for online safety verification of autonomous systems, which performs reachability analysis efficiently for both bounded and unbounded horizons by employing neural barrier certificates.

no code implementations • 12 Apr 2024 • Rudi Coppola, Andrea Peruffo, Licio Romao, Alessandro Abate, Manuel Mazo Jr

The abstraction of dynamical systems is a powerful tool that enables the design of feedback controllers using a correct-by-design framework.

no code implementations • 2 Apr 2024 • Thom Badings, Licio Romao, Alessandro Abate, Nils Jansen

To address this issue, we propose a novel abstraction scheme for stochastic linear systems that exploits the system's stability to obtain significantly smaller abstract models.

no code implementations • 11 Mar 2024 • Joar Skalse, Alessandro Abate

In addition to this, we also characterise the conditions under which a behavioural model is robust to small perturbations of the observed policy, and we analyse how robust many behavioural models are to misspecification of their parameter values (such as the discount rate).

1 code implementation • 29 Jan 2024 • Alexandros E. Tzikas, Licio Romao, Mert Pilanci, Alessandro Abate, Mykel J. Kochenderfer

Many machine learning applications require operating on a spatially distributed dataset.

no code implementations • 26 Jan 2024 • Joar Skalse, Alessandro Abate

Moreover, we find that scalar, Markovian rewards are unable to express most of the instances in each of these three classes.

no code implementations • 18 Dec 2023 • Rohan Mitta, Hosein Hasanbeig, Jun Wang, Daniel Kroening, Yiannis Kantaros, Alessandro Abate

This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL), such that the safety constraint violations are bounded at any point during learning.

no code implementations • 11 Dec 2023 • Luke Rickard, Alessandro Abate, Kostas Margellos

Synthesising verifiably correct controllers for dynamical systems is crucial for safety-critical problems.

no code implementations • 16 Nov 2023 • Thom Badings, Nils Jansen, Licio Romao, Alessandro Abate

Such autonomous systems are naturally modeled as stochastic dynamical models.

no code implementations • 16 Nov 2023 • Alec Edwards, Andrea Peruffo, Alessandro Abate

This paper presents Fossil 2.0, a new major release of a software tool for the synthesis of certificates (e.g., Lyapunov and barrier functions) for dynamical systems modelled as ordinary differential and difference equations.

1 code implementation • 3 Oct 2023 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska

Such computed lower bounds provide safety certification for the given policy and BNN model.

no code implementations • 26 Sep 2023 • Joar Skalse, Lucy Farnik, Sumeet Ramesh Motwani, Erik Jenner, Adam Gleave, Alessandro Abate

This means that reward learning algorithms generally must be evaluated empirically, which is expensive, and that their failure modes are difficult to anticipate in advance.

no code implementations • 12 Sep 2023 • Alec Edwards, Andrea Peruffo, Alessandro Abate

An emerging branch of control theory specialises in certificate learning: the specification of a desired (possibly complex) behaviour for an autonomous or control model, which is then verified analytically by means of a function-based proof.

no code implementations • 28 Jul 2023 • Alec Edwards, Mirco Giacobbe, Alessandro Abate

Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.

no code implementations • 11 Jul 2023 • James Fox, Matt MacDermott, Lewis Hammond, Paul Harrenstein, Alessandro Abate, Michael Wooldridge

Multi-agent influence diagrams (MAIDs) are a popular game-theoretic model based on Bayesian networks.

no code implementations • 29 Jun 2023 • Karan Mukhi, Alessandro Abate

The flexibility of an individual EV can be quantified as a convex polytope and the flexibility of a population of EVs is the Minkowski sum of these polytopes.
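The aggregation described here can be sketched in a simplified setting: assuming each EV's flexibility set is an axis-aligned box (a special case of a convex polytope), the Minkowski sum reduces to coordinate-wise addition of the bounds. The per-EV numbers below are illustrative, not taken from the paper:

```python
import numpy as np

def minkowski_sum_boxes(lowers, uppers):
    """Minkowski sum of axis-aligned boxes: add bounds coordinate-wise."""
    return np.sum(lowers, axis=0), np.sum(uppers, axis=0)

# Hypothetical per-EV charging-power flexibility over 3 time slots (kW):
lo = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # lower bounds, 2 EVs
hi = np.array([[3.0, 3.0, 3.0], [7.0, 7.0, 2.0]])  # upper bounds, 2 EVs

agg_lo, agg_hi = minkowski_sum_boxes(lo, hi)
print(agg_lo, agg_hi)  # aggregate fleet flexibility per slot
```

For general polytopes the Minkowski sum is harder to represent exactly, which is one motivation for working with structured inner approximations such as boxes or zonotopes.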

no code implementations • 5 Jun 2023 • Patrick Benjamin, Alessandro Abate

We introduce networked communication to the mean-field game framework, in particular to oracle-free settings where $N$ decentralised agents learn along a single, non-episodic run of the empirical system.

no code implementations • 12 Apr 2023 • Maico Hendrikus Wilhelmus Engelaar, Licio Romao, Yulong Gao, Mircea Lazar, Alessandro Abate, Sofie Haesaert

In this paper, we propose a new model reduction technique for linear stochastic systems that builds upon knowledge filtering and utilizes optimal Kalman filtering techniques.

no code implementations • 10 Apr 2023 • Frederik Baymler Mathiesen, Licio Romao, Simeon C. Calvert, Alessandro Abate, Luca Laurenti

In particular, we show that the stochastic program to synthesize a SBF can be relaxed into a chance-constrained optimisation problem on which scenario approach theory applies.

no code implementations • 2 Apr 2023 • Licio Romao, Ashish R. Hota, Alessandro Abate

We present a novel distributionally robust framework for dynamic programming that uses kernel methods to design feedback control policies.

no code implementations • 30 Mar 2023 • Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers

In order to learn the optimal structure, we define a Kantorovich-inspired metric between Markov chains, and we use it as a loss function.
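The paper's metric compares Markov chains; as a much simpler illustration of the underlying optimal-transport idea, the 1-Wasserstein (Kantorovich) distance between two distributions supported on equally spaced atoms equals the integral of the absolute difference of their CDFs. This is a generic sketch, not the loss used in the paper:

```python
import numpy as np

def w1_discrete(p, q):
    """1-Wasserstein distance between distributions on atoms 0, 1, ..., n-1
    (unit spacing): the integral of |CDF_p - CDF_q|."""
    return float(np.abs(np.cumsum(p - q)).sum())

p = np.array([1.0, 0.0, 0.0])  # all mass at atom 0
q = np.array([0.0, 0.0, 1.0])  # all mass at atom 2
print(w1_discrete(p, q))       # moving one unit of mass a distance of 2
```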

no code implementations • 23 Mar 2023 • Zifan Wang, Yulong Gao, Siyi Wang, Michael M. Zavlanos, Alessandro Abate, Karl H. Johansson

Distributional reinforcement learning (DRL) enhances the understanding of the effects of the randomness in the environment by letting agents learn the distribution of a random return, rather than its expected value as in standard RL.

1 code implementation • 27 Jan 2023 • Alessandro Abate, Alec Edwards, Mirco Giacobbe

We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.

no code implementations • 5 Jan 2023 • Lewis Hammond, James Fox, Tom Everitt, Ryan Carey, Alessandro Abate, Michael Wooldridge

Regarding question iii), we describe correspondences between causal games and other formalisms, and explain how causal games can be used to answer queries that other causal or game-theoretic models do not support.

1 code implementation • 4 Jan 2023 • Thom Badings, Licio Romao, Alessandro Abate, David Parker, Hasan A. Poonawala, Marielle Stoelinga, Nils Jansen

This iMDP is, with a user-specified confidence probability, robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples.
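The trade-off between sample count and interval tightness can be illustrated with a standard Hoeffding bound (the paper itself derives tighter, scenario-based intervals; this stand-in only shows the scaling):

```python
from math import log, sqrt

def hoeffding_halfwidth(n_samples, delta=0.05):
    """Half-width of a (1 - delta) confidence interval for a transition
    probability estimated from n_samples Bernoulli observations."""
    return sqrt(log(2.0 / delta) / (2.0 * n_samples))

# Quadrupling the number of samples halves the interval width:
print(hoeffding_halfwidth(100), hoeffding_halfwidth(400))
```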

1 code implementation • 28 Dec 2022 • Joar Skalse, Lewis Hammond, Charlie Griffin, Alessandro Abate

In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems.


no code implementations • 6 Dec 2022 • Joar Skalse, Alessandro Abate

In this paper, we provide a mathematical analysis of how robust different IRL models are to misspecification, and answer precisely how the demonstrator policy may differ from each of the standard models before that model leads to faulty inferences about the reward function $R$.

no code implementations • 4 Dec 2022 • Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers

We propose a sample-based, sequential method to abstract a (potentially black-box) dynamical system with a sequence of memory-dependent Markov chains of increasing size.

no code implementations • 1 Dec 2022 • Luke Rickard, Thom Badings, Licio Romao, Alessandro Abate

We consider the cases where the transition probabilities of this MDP are either known up to an interval or completely unknown.

1 code implementation • 12 Oct 2022 • Thom Badings, Licio Romao, Alessandro Abate, Nils Jansen

Stochastic noise causes aleatoric uncertainty, whereas imprecise knowledge of model parameters leads to epistemic uncertainty.

no code implementations • 30 Sep 2022 • Daniel Jarne Ornia, Licio Romao, Lewis Hammond, Manuel Mazo Jr., Alessandro Abate

Robustness of Reinforcement Learning policies should not be pursued at any cost: the alterations that robustness requirements impose on otherwise optimal policies should be explainable, quantifiable and formally verifiable.

1 code implementation • 21 Sep 2022 • Hosein Hasanbeig, Daniel Kroening, Alessandro Abate

LCRL is a software tool that implements model-free Reinforcement Learning (RL) algorithms over unknown Markov Decision Processes (MDPs), synthesising policies that satisfy a given linear temporal specification with maximal probability.

no code implementations • 25 Aug 2022 • Alessandro Abate, Yousif Almulla, James Fox, David Hyland, Michael Wooldridge

Second, we propose a novel method for distilling the task automaton (assumed to be a deterministic finite automaton) from the learnt product MDP.

no code implementations • 12 Aug 2022 • Scott R. Jeen, Alessandro Abate, Jonathan M. Cullen

Heating and cooling systems in buildings account for 31% of global energy use, much of which is regulated by Rule Based Controllers (RBCs) that neither maximise energy efficiency nor minimise emissions by interacting optimally with the grid.

1 code implementation • 28 Jun 2022 • Scott R. Jeen, Alessandro Abate, Jonathan M. Cullen

Heating and cooling systems in buildings account for 31% of global energy use, much of which is regulated by Rule Based Controllers (RBCs) that neither maximise energy efficiency nor minimise emissions by interacting optimally with the grid.

no code implementations • 14 Mar 2022 • Joar Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, Adam Gleave

It is often very challenging to manually design reward functions for complex, real-world tasks.

no code implementations • 25 Oct 2021 • Thom S. Badings, Alessandro Abate, Nils Jansen, David Parker, Hasan A. Poonawala, Marielle Stoelinga

We use state-of-the-art verification techniques to provide guarantees on the iMDP, and compute a controller for which these guarantees carry over to the autonomous system.

1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska

We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.

1 code implementation • 24 Feb 2021 • Mingyu Cai, Mohammadhosein Hasanbeig, Shaoping Xiao, Alessandro Abate, Zhen Kan

This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs) with unknown transition probabilities over continuous state and action spaces.

1 code implementation • 9 Feb 2021 • Lewis Hammond, James Fox, Tom Everitt, Alessandro Abate, Michael Wooldridge

Multi-agent influence diagrams (MAIDs) are a popular form of graphical model that, for certain classes of games, have been shown to offer key complexity and explainability advantages over traditional extensive form game (EFG) representations.

1 code implementation • 1 Feb 2021 • Lewis Hammond, Alessandro Abate, Julian Gutierrez, Michael Wooldridge

In this paper, we study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment, which may exhibit probabilistic behaviour.


1 code implementation • 7 Aug 2020 • Kyriakos Polymenakos, Nikitas Rontsis, Alessandro Abate, Stephen Roberts

SafePILCO is a software tool for safe and data-efficient policy search with reinforcement learning.

no code implementations • 21 Jul 2020 • Daniele Ahmed, Andrea Peruffo, Alessandro Abate

In this paper we employ SMT solvers to soundly synthesise Lyapunov functions that assert the stability of a given dynamical model.
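The SMT-based synthesis itself is beyond a short sketch, but for a linear system dx/dt = Ax the classical construction solves the Lyapunov equation A^T P + P A = -Q and checks that P is positive definite, so that V(x) = x^T P x certifies stability. A minimal numerical sketch, not the sound SMT-backed procedure of the paper:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A.T @ P + P @ A = -Q via the Kronecker-vectorised linear system."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A.T) + np.kron(A.T, I)        # acts on vec(P), column-major
    vec_p = np.linalg.solve(K, -Q.flatten(order="F"))
    return vec_p.reshape((n, n), order="F")

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # stable: eigenvalues -1, -2
P = solve_lyapunov(A, np.eye(2))
# V(x) = x.T @ P @ x is a Lyapunov function iff P is positive definite:
print(np.linalg.eigvalsh(P))                     # both eigenvalues positive
```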

no code implementations • 7 Jul 2020 • Andrea Peruffo, Daniele Ahmed, Alessandro Abate

We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BC) for the safety verification of continuous and hybrid dynamical models.

no code implementations • 6 Jul 2020 • Thomas J. Ringstrom, Mohammadhosein Hasanbeig, Alessandro Abate

In Hierarchical Control, compositionality, abstraction, and task-transfer are crucial for designing versatile algorithms which can solve a variety of problems with maximal representational reuse.

1 code implementation • NeurIPS 2020 • Francesco Cosentino, Harald Oberhauser, Alessandro Abate

Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure that is supported on a subset of $n+1$ of the original $N$ atoms and has the same mean when integrated against each of the $n$ functions.
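This existence result is classically established by a recombination (Carathéodory-type) argument: while more than n+1 atoms carry mass, move the weights along a null direction of the moment matrix until one weight vanishes. A rough numpy sketch of that argument, not the authors' algorithm:

```python
import numpy as np

def recombine(F, w, tol=1e-12):
    """Reduce a discrete probability measure to at most n+1 atoms while
    preserving the integrals of n functions. F is (n, N) with F[j, i] =
    f_j(x_i); w is (N,) nonnegative and sums to one."""
    n, N = F.shape
    M = np.vstack([F, np.ones(N)])            # n moments + total mass
    w = w.astype(float).copy()
    support = np.flatnonzero(w > tol)
    while len(support) > n + 1:
        # a null vector v of M restricted to the support keeps M @ w fixed
        v = np.linalg.svd(M[:, support])[2][-1]
        if not (v > tol).any():
            v = -v
        pos = v > tol
        t = np.min(w[support][pos] / v[pos])  # largest step keeping w >= 0
        w[support] = w[support] - t * v
        w[np.abs(w) < tol] = 0.0              # atoms that hit zero drop out
        support = np.flatnonzero(w > tol)
    return w

rng = np.random.default_rng(1)
x = rng.normal(size=10)                       # 10 atoms, uniform weights
F = np.vstack([x, x**2])                      # preserve first two moments
w0 = np.full(10, 0.1)
w1 = recombine(F, w0)
print(np.count_nonzero(w1))                   # at most n + 1 = 3
```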

1 code implementation • 2 Jun 2020 • Francesco Cosentino, Harald Oberhauser, Alessandro Abate

Various flavours of Stochastic Gradient Descent (SGD) replace the expensive summation that computes the full gradient by approximating it with a small sum over a randomly selected subsample of the data set that in turn suffers from a high variance.
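As a baseline for what such subsampling replaces, the standard minibatch gradient is an unbiased but high-variance estimate of the full-data gradient; the synthetic least-squares setup below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)
w = np.zeros(5)

def full_grad(w):
    """Exact gradient of the mean squared error over all 1000 points."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def minibatch_grad(w, batch):
    """The same gradient estimated from a small random subsample."""
    return 2.0 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)

# Unbiased but noisy: averaging many minibatch gradients recovers the
# full gradient, while any single draw fluctuates around it.
draws = [minibatch_grad(w, rng.choice(len(y), 32, replace=False))
         for _ in range(2000)]
est = np.mean(draws, axis=0)
print(np.linalg.norm(est - full_grad(w)))
```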

no code implementations • 19 Mar 2020 • Alessandro Abate, Daniele Ahmed, Mirco Giacobbe, Andrea Peruffo

We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks (LNNs).

no code implementations • 26 Feb 2020 • Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening

This paper presents the concept of an adaptive safe padding that forces Reinforcement Learning (RL) to synthesise optimal control policies while ensuring safety during the learning process.

no code implementations • 29 Nov 2019 • Kyriakos Polymenakos, Luca Laurenti, Andrea Patane, Jan-Peter Calliess, Luca Cardelli, Marta Kwiatkowska, Alessandro Abate, Stephen Roberts

Gaussian Processes (GPs) are widely employed in control and learning because of their principled treatment of uncertainty.

1 code implementation • 22 Nov 2019 • Mohammadhosein Hasanbeig, Natasha Yogananda Jeppu, Alessandro Abate, Tom Melham, Daniel Kroening

This paper proposes DeepSynth, a method for effective training of deep Reinforcement Learning (RL) agents when the reward is sparse and non-Markovian, but at the same time progress towards the reward requires achieving an unknown sequence of high-level objectives.

2 code implementations • 23 Sep 2019 • Lim Zun Yuan, Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening

We propose an actor-critic, model-free, and online Reinforcement Learning (RL) framework for continuous-state continuous-action Markov Decision Processes (MDPs) when the reward is highly sparse but encompasses a high-level temporal structure.

1 code implementation • 11 Sep 2019 • Mohammadhosein Hasanbeig, Yiannis Kantaros, Alessandro Abate, Daniel Kroening, George J. Pappas, Insup Lee

Reinforcement Learning (RL) has emerged as an efficient method of choice for solving complex sequential decision making problems in automatic control, computer science, economics, and biology.

1 code implementation • 2 Feb 2019 • Hosein Hasanbeig, Daniel Kroening, Alessandro Abate

Reinforcement Learning (RL) is a widely employed machine learning architecture that has been applied to a variety of control problems.

no code implementations • 20 Sep 2018 • Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening

We propose a method for efficient training of Q-functions for continuous-state Markov Decision Processes (MDPs) such that the traces of the resulting policies satisfy a given Linear Temporal Logic (LTL) property.
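A heavily simplified illustration of the product construction such methods rely on: a toy deterministic line-world composed with a two-state automaton for "eventually a", with reward granted on entering the accepting automaton state. This is a generic tabular sketch, not the continuous-state method of the paper; all names and numbers are illustrative:

```python
import random

random.seed(0)
N, GOAL = 5, 4                      # line-world states 0..4; label 'a' at 4
ACTIONS = [-1, 1]
Q = {}                              # Q-values on the product state (s, q, a)

def step(s, a):                     # deterministic dynamics, clamped at ends
    return min(max(s + a, 0), N - 1)

def dfa(q, s):                      # 2-state DFA for "eventually a"
    return 1 if (q == 1 or s == GOAL) else 0

def qval(s, q, a):
    return Q.get((s, q, a), 0.0)

for _ in range(2000):               # Q-learning on the product MDP
    s, q = 0, 0
    for _ in range(30):
        if random.random() < 0.5:   # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: qval(s, q, b))
        s2 = step(s, a)
        q2 = dfa(q, s2)
        r = 1.0 if (q == 0 and q2 == 1) else 0.0   # reward on acceptance
        target = r + 0.9 * max(qval(s2, q2, b) for b in ACTIONS)
        Q[(s, q, a)] = qval(s, q, a) + 0.5 * (target - qval(s, q, a))
        s, q = s2, q2
        if q == 1:                  # accepting: end the episode
            break

s, q = 0, 0                         # greedy rollout satisfies the property
for _ in range(10):
    a = max(ACTIONS, key=lambda b: qval(s, q, b))
    s = step(s, a)
    q = dfa(q, s)
print("accepted:", q == 1)
```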

1 code implementation • 24 Jan 2018 • Mohammadhosein Hasanbeig, Alessandro Abate, Daniel Kroening

With this reward function, the policy synthesis procedure is "constrained" by the given specification.

1 code implementation • 15 Dec 2017 • Kyriakos Polymenakos, Alessandro Abate, Stephen Roberts

We propose a method to optimise the parameters of a policy which will be used to safely perform a given task in a data-efficient manner.

no code implementations • 5 Jul 2017 • Elizabeth Polgreen, Viraj Wijesuriya, Sofie Haesaert, Alessandro Abate

We present a new method for statistical verification of quantitative properties over a partially unknown system with actions, utilising a parameterised model (in this work, a parametric Markov decision process) and data collected from experiments performed on the underlying system.

no code implementations • 1 Sep 2014 • Sofie Haesaert, Robert Babuska, Alessandro Abate

This article deals with stochastic processes endowed with the Markov (memoryless) property and evolving over general (uncountable) state spaces.

Papers With Code is a free resource with all data licensed under CC-BY-SA.