Search Results for author: Sebastian Junges

Found 21 papers, 7 papers with code

Approximate Dec-POMDP Solving Using Multi-Agent A*

no code implementations · 9 May 2024 · Wietze Koops, Sebastian Junges, Nils Jansen

Our experiments demonstrate the efficacy and scalability of the approach.

Imprecise Probabilities Meet Partial Observability: Game Semantics for Robust POMDPs

no code implementations · 8 May 2024 · Eline M. Bovy, Marnix Suilen, Sebastian Junges, Nils Jansen

Partially observable Markov decision processes (POMDPs) rely on the key assumption that probability distributions are precisely known.
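
For intuition, the simplest imprecise model replaces each transition probability by an interval, and an adversarial "nature" resolves the imprecision. A minimal sketch of that inner step (plain Python, toy numbers; the paper's game semantics covers much more general uncertainty sets than intervals):

```python
def worst_case_expectation(intervals, values):
    """Adversarial expectation over an interval ambiguity set: 'nature'
    picks a distribution with lo_i <= p_i <= hi_i and sum p_i = 1 that
    minimizes sum_i p_i * values[i]. Assumes the intervals admit at
    least one valid distribution."""
    p = [lo for lo, _ in intervals]               # start at the lower bounds
    slack = 1.0 - sum(p)                          # mass still to be assigned
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        add = min(intervals[i][1] - p[i], slack)  # push mass to cheap states
        p[i] += add
        slack -= add
    return sum(pi * vi for pi, vi in zip(p, values))

# Two successor states, values 0.0 (bad) and 1.0 (good):
print(worst_case_expectation([(0.2, 0.6), (0.4, 0.8)], [0.0, 1.0]))  # 0.4
```

The greedy step above is the standard inner optimization of robust value iteration for interval models.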

Factored Online Planning in Many-Agent POMDPs

no code implementations · 18 Dec 2023 · Maris F. L. Galesloot, Thiago D. Simão, Sebastian Junges, Nils Jansen

However, the challenges of value estimation and belief estimation have only been tackled individually, which prevents existing methods from scaling to settings with many agents.

Efficient Sensitivity Analysis for Parametric Robust Markov Chains

no code implementations · 1 May 2023 · Thom Badings, Sebastian Junges, Ahmadreza Marandi, Ufuk Topcu, Nils Jansen

As our main contribution, we present an efficient method to compute these partial derivatives.
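
For intuition, on small instances such a derivative can be computed symbolically: solve the linear equation system for the reachability probability, then differentiate the resulting rational function. A sketch using sympy on a hypothetical two-state parametric chain (the paper's contribution is doing this efficiently, for robust MCs, at scale):

```python
import sympy as sp

p = sp.symbols('p', positive=True)
x0, x1 = sp.symbols('x0 x1')

# Hypothetical two-state parametric chain: s0 reaches s1 with prob p
# (else fail); s1 reaches the goal with prob p (else back to s0).
sol = sp.solve([sp.Eq(x0, p * x1),
                sp.Eq(x1, p + (1 - p) * x0)], [x0, x1])

reach = sp.simplify(sol[x0])                 # p**2/(p**2 - p + 1)
sensitivity = sp.simplify(sp.diff(reach, p))
print(sensitivity)                           # p*(2 - p)/(p**2 - p + 1)**2, up to formatting
print(sensitivity.subs(p, sp.Rational(1, 2)))   # 4/3
```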

COOL-MC: A Comprehensive Tool for Reinforcement Learning and Model Checking

3 code implementations · 15 Sep 2022 · Dennis Gross, Nils Jansen, Sebastian Junges, Guillermo A. Pérez

This paper presents COOL-MC, a tool that integrates state-of-the-art reinforcement learning (RL) and model checking.

Tasks: OpenAI Gym, Reinforcement Learning, +1 more

Abstraction-Refinement for Hierarchical Probabilistic Models

no code implementations6 Jun 2022 Sebastian Junges, Matthijs T. J. Spaan

The key ideas to accelerate analysis of such programs are (1) to treat the behavior of the subroutine as uncertain and only remove this uncertainty by a detailed analysis if needed, and (2) to abstract similar subroutines into a parametric template, and then analyse this template.
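
A minimal sketch of idea (1), with a hypothetical subroutine and a toy composition: bound the subroutine's success probability by a cheap interval first, and only pay for the exact analysis when the bounds leave the property undecided:

```python
def analyze_subroutine_exactly():
    return 0.9                      # stand-in for a costly detailed analysis

def overall_bounds(lo, hi):
    # Toy composition: the program succeeds iff two independent calls
    # of the subroutine succeed.
    return lo ** 2, hi ** 2

threshold = 0.7
lo, hi = overall_bounds(0.8, 1.0)   # cheap a-priori interval for the subroutine
if lo >= threshold:
    print("holds under the coarse abstraction")
elif hi < threshold:
    print("violated under the coarse abstraction")
else:                               # bounds are inconclusive: refine
    q = analyze_subroutine_exactly()
    print("after refinement:", overall_bounds(q, q)[0] >= threshold)  # True
```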

Safe Reinforcement Learning via Shielding under Partial Observability

no code implementations · 2 Apr 2022 · Steven Carr, Nils Jansen, Sebastian Junges, Ufuk Topcu

Safe exploration is a common problem in reinforcement learning (RL) that aims to prevent agents from making disastrous decisions while exploring their environment.

Tasks: Reinforcement Learning (RL), +2 more

Convex Optimization for Parameter Synthesis in MDPs

1 code implementation · 30 Jun 2021 · Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, Ufuk Topcu

The parameter synthesis problem is to compute an instantiation of these unspecified parameters such that the resulting MDP satisfies the temporal logic specification.

Tasks: Collision Avoidance
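
On a toy one-parameter chain the synthesis problem can be written out directly. A sketch using sympy with a hypothetical rational reachability function (the paper instead handles pMDPs via a convex-concave procedure, not symbolic solving):

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# Hypothetical reachability probability of a one-parameter chain,
# a rational function of the parameter p.
reach = p**2 / (p**2 - p + 1)

# Parameter synthesis: which instantiations p in (0, 1) push the
# reachability probability to at least 4/5?
print(sp.reduce_inequalities([reach >= sp.Rational(4, 5), p < 1]))
# 2*sqrt(2) - 2 <= p < 1, up to sympy's formatting
```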

Inductive Synthesis for Probabilistic Programs Reaches New Horizons

no code implementations · 29 Jan 2021 · Roman Andriushchenko, Milan Češka, Sebastian Junges, Joost-Pieter Katoen

The method builds on a novel inductive oracle that greedily generates counter-examples (CEs) for violating programs and uses them to prune the family.
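
A schematic sketch of this counterexample-guided loop over a hypothetical finite family, with a stand-in verifier; the point is that a single CE can prune many family members at once:

```python
import itertools

# Hypothetical family: assignments to two holes of a program sketch.
family = set(itertools.product([0.1, 0.3, 0.5], ['a', 'b']))

def verify(candidate):
    """Stand-in verifier: None if the candidate satisfies the spec,
    otherwise a counterexample naming the offending hole value."""
    prob, mode = candidate
    if mode == 'a':
        return (1, 'a')             # every candidate with mode 'a' fails
    if prob < 0.5:
        return (0, prob)            # this probability is too low
    return None

while family:
    candidate = next(iter(family))
    ce = verify(candidate)
    if ce is None:
        print("synthesized:", candidate)
        break
    hole, value = ce                # prune all members the CE also covers
    family = {c for c in family if c[hole] != value}
else:
    print("no satisfying candidate in the family")
```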

Robust Finite-State Controllers for Uncertain POMDPs

no code implementations · 24 Sep 2020 · Murat Cubuktepe, Nils Jansen, Sebastian Junges, Ahmadreza Marandi, Marnix Suilen, Ufuk Topcu

(3) We linearize this dual problem and (4) solve the resulting finite linear program to obtain locally optimal solutions to the original problem.

Tasks: Collision Avoidance, Motion Planning
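
Step (4) ends in an ordinary finite linear program. A generic illustration with scipy, using a toy objective and constraint (the actual LP in the paper arises from linearizing the dual of the robust program):

```python
from scipy.optimize import linprog

# Generic stand-in for step (4): maximize x0 + 2*x1 subject to
# x0 + x1 <= 1 and x0, x1 >= 0 (linprog minimizes, hence the negation).
res = linprog(c=[-1.0, -2.0], A_ub=[[1.0, 1.0]], b_ub=[1.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal solution [0, 1] with value 2.0
```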

Verification of indefinite-horizon POMDPs

1 code implementation · 30 Jun 2020 · Alexander Bork, Sebastian Junges, Joost-Pieter Katoen, Tim Quatmann

This paper considers the verification problem for partially observable MDPs, in which the policies make their decisions based on (the history of) the observations emitted by the system.
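
The standard way to condense the observation history is the belief, a posterior over states. A minimal sketch of the Bayesian update, with hypothetical dict-based transition and observation models:

```python
def belief_update(belief, action, obs, T, O):
    """One Bayesian update of the belief, the sufficient statistic of
    the observation history that POMDP policies can condition on.

    belief:   dict state -> probability
    T[s][a]:  dict successor -> probability   (transition model)
    O[s2][a]: dict observation -> probability (emitted in successor s2)
    """
    successors = {s2 for s in belief for s2 in T[s][action]}
    new_belief = {}
    for s2 in successors:
        pred = sum(belief[s] * T[s][action].get(s2, 0.0) for s in belief)
        new_belief[s2] = O[s2][action].get(obs, 0.0) * pred
    norm = sum(new_belief.values())           # probability of seeing obs
    return {s2: w / norm for s2, w in new_belief.items()}

# Hypothetical two-state model: a noisy 'ping' hints that we reached s1.
T = {'s0': {'go': {'s0': 0.5, 's1': 0.5}}, 's1': {'go': {'s1': 1.0}}}
O = {'s0': {'go': {'ping': 0.2, 'none': 0.8}},
     's1': {'go': {'ping': 0.9, 'none': 0.1}}}
print(belief_update({'s0': 1.0}, 'go', 'ping', T, O))
# ~ {'s0': 0.18, 's1': 0.82}, up to dict ordering
```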

Enforcing Almost-Sure Reachability in POMDPs

1 code implementation · 30 Jun 2020 · Sebastian Junges, Nils Jansen, Sanjit A. Seshia

Partially-Observable Markov Decision Processes (POMDPs) are a well-known stochastic model for sequential decision making under limited information.

Tasks: Decision Making, Reinforcement Learning, +2 more

Counterexample-Driven Synthesis for Probabilistic Program Sketches

1 code implementation · 28 Apr 2019 · Milan Češka, Christian Hensel, Sebastian Junges, Joost-Pieter Katoen

Probabilistic programs are key to dealing with uncertainty in, e.g., controller synthesis.

Shepherding Hordes of Markov Chains

1 code implementation · 15 Feb 2019 · Milan Češka, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen

This paper considers large families of Markov chains (MCs) that are defined over a set of parameters with finite discrete domains.

The Partially Observable Games We Play for Cyber Deception

no code implementations · 28 Sep 2018 · Mohamadreza Ahmadi, Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, Ufuk Topcu

Then, the deception problem is to compute a strategy for the deceiver that minimizes the expected cost of deception against all strategies of the infiltrator.
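
In the simplest (one-shot, fully observable) case this is a matrix game: the deceiver commits to a mixed strategy and the infiltrator best-responds. A toy sketch with a hypothetical cost matrix (the paper works on partially observable stochastic games):

```python
import numpy as np

# Hypothetical deception-cost matrix: rows = deceiver actions,
# columns = infiltrator actions.
C = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# The deceiver commits to a mix (t, 1-t); the infiltrator then picks
# the column that maximizes the deceiver's expected cost.
ts = np.linspace(0.0, 1.0, 10001)
worst = np.maximum(ts * C[0, 0] + (1 - ts) * C[1, 0],
                   ts * C[0, 1] + (1 - ts) * C[1, 1])
i = worst.argmin()                 # best mix against a best response
print(ts[i], worst[i])             # 0.25, 2.5: the game value
```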

Safe Reinforcement Learning via Probabilistic Shields

no code implementations · 16 Jul 2018 · Nils Jansen, Bettina Könighofer, Sebastian Junges, Alexandru C. Serban, Roderick Bloem

This paper targets the efficient construction of a safety shield for decision making in scenarios that incorporate uncertainty.

Tasks: Decision Making, Reinforcement Learning, +3 more
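
A common blocking rule for such shields keeps an action only if its safety value is within a factor of the best action available. A minimal sketch with hypothetical safety values (in the paper, these values come from model checking, not from a lookup table):

```python
def shield(state, actions, safety_value, delta=0.9):
    """Keep an action only if its probability of staying safe is within
    a factor `delta` of the best action available in this state.
    `safety_value(s, a)` would be computed by a model checker; here it
    is a hypothetical lookup."""
    best = max(safety_value(state, a) for a in actions)
    return [a for a in actions if safety_value(state, a) >= delta * best]

vals = {'left': 0.99, 'right': 0.95, 'jump': 0.40}
print(shield('s', list(vals), lambda s, a: vals[a]))  # ['left', 'right']
```

The RL agent may then explore freely among the actions the shield lets through.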

Synthesis in pMDPs: A Tale of 1001 Parameters

no code implementations · 5 Mar 2018 · Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, Ufuk Topcu

This paper considers parametric Markov decision processes (pMDPs) whose transitions are equipped with affine functions over a finite set of parameters.

Strategy Synthesis in POMDPs via Game-Based Abstractions

no code implementations · 14 Aug 2017 · Leonore Winterer, Sebastian Junges, Ralf Wimmer, Nils Jansen, Ufuk Topcu, Joost-Pieter Katoen, Bernd Becker

We study synthesis problems with constraints in partially observable Markov decision processes (POMDPs), where the objective is to compute a strategy for an agent that is guaranteed to satisfy certain safety and performance specifications.

Tasks: Motion Planning
