1 code implementation • 16 Nov 2023 • Katharina Stein, Daniel Fišer, Jörg Hoffmann, Alexander Koller
LLMs are being increasingly used for planning-style tasks, but their capabilities for planning and reasoning are poorly understood.
1 code implementation • 13 Jun 2022 • Maria Christakis, Hasan Ferit Eniser, Jörg Hoffmann, Adish Singla, Valentin Wüstholz
Here, we show the wide applicability of $k$-safety properties for machine-learning models and present the first specification language for expressing them.
1 code implementation • 17 Mar 2022 • Stefan Borgwardt, Jörg Hoffmann, Alisa Kovtunova, Markus Krötzsch, Bernhard Nebel, Marcel Steinmetz
State constraints in AI Planning globally restrict the legal environment states.
no code implementations • 5 Jul 2021 • Alfred Ultsch, Jörg Hoffmann, Maximilian Röhnert, Malte Von Bonin, Uta Oelschlägel, Cornelia Brendel, Michael C. Thrun
A comparison to a selection of state-of-the-art explainable AI systems shows that ALPODS operates efficiently on known benchmark data as well as on everyday routine case data.
no code implementations • 19 Nov 2020 • Rebecca Eifler, Jörg Hoffmann
Adopting the recent approach to answer such questions in terms of plan-property dependencies, here we implement a tool and user interface for human-guided iterative planning including plan-space explanations.
no code implementations • COLING 2020 • Arne Köhn, Julia Wichlacz, Álvaro Torralba, Daniel Höller, Jörg Hoffmann, Alexander Koller
When generating technical instructions, it is often convenient to describe complex objects in the world at different levels of abstraction.
no code implementations • 3 Aug 2020 • Timo P. Gros, Daniel Höller, Jörg Hoffmann, Verena Wolf
Our evaluations show that, for this sequential decision-making problem, deep reinforcement learning performs best in many respects, even though imitation learning is trained on optimal decisions.
no code implementations • 15 May 2017 • Patrick Speicher, Marcel Steinmetz, Jörg Hoffmann, Michael Backes, Robert Künnemann
Penetration testing is a well-established practice for identifying potentially exploitable security weaknesses and an important component of a security audit.
no code implementations • 15 Jan 2014 • Jörg Hoffmann, Piergiorgio Bertoli, Malte Helmert, Marco Pistore
The special case, which we term "forward effects", is characterized by the fact that every ramification of a web service application involves at least one new constant generated as output by the web service.