1 code implementation • 19 Oct 2023 • Felix Ocker, Jörg Deigmöller, Julian Eggert
This paper shows that the technique used for knowledge extraction can be applied to populate a minimalist ontology, showcasing the potential of LLMs in synergy with formal knowledge representation.
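Populating an ontology from LLM output can be sketched as turning extracted (subject, predicate, object) triples into a small knowledge structure. The triple format, class names, and example triples below are assumptions for illustration only; the paper's actual ontology and extraction pipeline are not reproduced here.

```python
# Minimal sketch: populating a small ontology from LLM-extracted triples.
# The triple format and the sample triples are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Ontology:
    entities: set = field(default_factory=set)
    relations: dict = field(default_factory=dict)  # (subject, predicate) -> object

    def add_triple(self, subject: str, predicate: str, obj: str) -> None:
        """Register both entities and store the relation between them."""
        self.entities.update({subject, obj})
        self.relations[(subject, predicate)] = obj


# Hypothetical triples, as an LLM knowledge-extraction step might return them:
extracted = [
    ("Kitchen", "contains", "Refrigerator"),
    ("Refrigerator", "isA", "Appliance"),
]

onto = Ontology()
for s, p, o in extracted:
    onto.add_triple(s, p, o)
```

In a real pipeline the triples would be validated against the ontology's schema before insertion; this sketch only shows the population step itself.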
no code implementations • 20 Jul 2023 • Tim Puphal, Julian Eggert
In this paper, we develop risk shadowing, a situation understanding method that allows us to go beyond single interactions by analyzing group interactions between three agents.
no code implementations • 10 May 2023 • Frank Joublin, Antonello Ceravola, Joerg Deigmoeller, Michael Gienger, Mathias Franzius, Julian Eggert
Large language models (LLMs) have recently become a popular topic in the field of Artificial Intelligence (AI) research, with companies such as Google, Amazon, Facebook, Tesla, and Apple investing heavily in their development.
no code implementations • 14 Mar 2023 • Julian Eggert, Tim Puphal
In this paper, we compare three different model-based risk measures by evaluating their strengths and weaknesses qualitatively and testing them quantitatively on a set of real longitudinal and intersection scenarios.
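As a generic illustration of what a model-based risk measure computes, the sketch below implements time-to-collision (TTC) for a longitudinal following scenario. TTC is a standard textbook measure; it is not claimed to be one of the three measures compared in the paper, and the threshold value is a hypothetical parameter chosen for demonstration.

```python
# Illustrative model-based risk measure: time-to-collision (TTC) in a
# longitudinal following scenario. Not one of the paper's measures;
# the criticality threshold below is an assumed demonstration value.

def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Time until the ego vehicle closes the gap to the lead vehicle.

    Returns float('inf') when the ego is not closing in on the lead vehicle.
    """
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return float("inf")  # not on a collision course
    return gap_m / closing_speed


def is_critical(ttc_s: float, threshold_s: float = 2.0) -> bool:
    """Flag situations whose TTC falls below a (hypothetical) criticality threshold."""
    return ttc_s < threshold_s


# Example: 30 m gap, ego at 20 m/s, lead at 10 m/s -> TTC = 3.0 s.
```

Qualitative comparisons of such measures typically look at exactly the kind of edge cases visible here, e.g. that TTC is undefined (infinite) whenever the ego is slower than the lead vehicle.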
no code implementations • 13 Mar 2023 • Florian Damerow, Yuda Li, Tim Puphal, Benedict Flade, Julian Eggert
Here, we concentrate on intersection scenarios that are difficult to access visually.
no code implementations • 13 Mar 2023 • Tim Puphal, Malte Probst, Julian Eggert
Risk assessment is a central element for the development and validation of Autonomous Vehicles (AV).
no code implementations • 13 Mar 2023 • Tim Puphal, Malte Probst, Yiyang Li, Yosuke Sakamoto, Julian Eggert
We consider the problem of correct motion planning for T-intersection merge-ins of arbitrary geometry and vehicle density.
no code implementations • 13 Mar 2023 • Tim Puphal, Raphael Wenzel, Benedict Flade, Malte Probst, Julian Eggert
Based on the results, we can further derive a novel filter architecture with multiple filter steps, for which risk models are recommended for each step, to further improve the robustness.