no code implementations • 22 May 2023 • Jia Yu Tee, Oliver De Candido, Wolfgang Utschick, Philipp Geiger
Towards safe autonomous driving (AD), we consider the problem of learning models that accurately capture the diversity and tail quantiles of human driver behavior probability distributions, in interaction with an AD vehicle.
1 code implementation • 3 Mar 2022 • Philipp Geiger, Christoph-Nikolas Straehle
For flexible yet safe imitation learning (IL), we propose theory and a modular method, with a safety layer that enables a closed-form probability density/gradient of the safe generative continuous policy, end-to-end generative adversarial training, and worst-case safety guarantees.
2 code implementations • 17 Aug 2020 • Philipp Geiger, Christoph-Nikolas Straehle
For prediction of interacting agents' trajectories, we propose an end-to-end trainable architecture that hybridizes neural nets with game-theoretic reasoning, has interpretable intermediate representations, and transfers to downstream decision making.
no code implementations • 2 Mar 2020 • Jalal Etesami, Philipp Geiger
Learning from demonstrations (LfD) is an efficient paradigm to train AI agents.
no code implementations • 16 Mar 2018 • Philipp Geiger, Michel Besserve, Justus Winkelmann, Claudius Proissl, Bernhard Schölkopf
We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.).
no code implementations • 14 Jun 2016 • Philipp Geiger, Katja Hofmann, Bernhard Schölkopf
The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it.
no code implementations • 4 Mar 2016 • Philipp Geiger, Lucian Carata, Bernhard Schölkopf
Cloud computing involves complex technical and economic systems and interactions.
no code implementations • 14 Nov 2014 • Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf
A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.
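The "regress present on past" step can be sketched concretely. The snippet below (an illustrative sketch, not the paper's code) simulates a bivariate VAR(1) process $X_t = B X_{t-1} + \varepsilon_t$ with a chosen transition matrix $B$ and recovers the least-squares estimate $\hat{B}$; the matrix, sample size, and noise model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth transition matrix B of a bivariate VAR(1) process
# X_t = B @ X_{t-1} + noise (values chosen for illustration).
B = np.array([[0.5, 0.3],
              [0.0, 0.7]])

# Simulate the time series.
T = 20000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = B @ X[t - 1] + rng.normal(size=2)

# "Regress present on past": least-squares estimate B_hat of B.
past, present = X[:-1], X[1:]
B_hat = np.linalg.lstsq(past, present, rcond=None)[0].T
```

Absent hidden confounding or measurement error, $\hat{B}$ converges to $B$; the paper's point is to analyze conditions under which this causal reading of $\hat{B}$ is or is not justified.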