no code implementations • 3 Mar 2022 • Philipp Geiger, Christoph-Nikolas Straehle
For flexible yet safe imitation learning (IL), we propose a modular approach that uses a generative imitator policy with a safety layer, has an overall explicit density/gradient, can therefore be end-to-end trained using generative adversarial IL (GAIL), and comes with theoretical worst-case safety/robustness guarantees.
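The modular policy-plus-safety-layer idea can be sketched minimally: a generative imitator proposes an action and a safety layer maps it into a known-safe set. This is a hypothetical illustration only (the names, the Gaussian imitator, and the box-shaped safe set are assumptions; the paper's actual layer is constructed so the composed policy retains an explicit density/gradient, which plain clipping does not):

```python
import numpy as np

rng = np.random.default_rng(0)

def imitator_policy(state):
    # Hypothetical generative imitator: a Gaussian action around a
    # state-dependent mean (stands in for a learned network).
    return rng.normal(loc=0.1 * state, scale=0.5)

def safety_layer(action, a_min=-1.0, a_max=1.0):
    # Hypothetical safety layer: map any raw action into a known-safe
    # box, so the composed policy has a worst-case guarantee on the
    # action range regardless of what the imitator proposes.
    return np.clip(action, a_min, a_max)

state = 5.0
raw = imitator_policy(state)
safe = safety_layer(raw)
assert -1.0 <= safe <= 1.0  # guarantee holds for any raw action
```

The modularity shown here is the point: the guarantee comes from the safety layer alone, independently of how the imitator was trained.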
2 code implementations • 17 Aug 2020 • Philipp Geiger, Christoph-Nikolas Straehle
For prediction of interacting agents' trajectories, we propose an end-to-end trainable architecture that hybridizes neural nets with game-theoretic reasoning, has interpretable intermediate representations, and transfers to downstream decision making.
no code implementations • 2 Mar 2020 • Jalal Etesami, Philipp Geiger
Learning from demonstrations (LfD) is an efficient paradigm to train AI agents.
no code implementations • 16 Mar 2018 • Philipp Geiger, Michel Besserve, Justus Winkelmann, Claudius Proissl, Bernhard Schölkopf
We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.).
no code implementations • 14 Jun 2016 • Philipp Geiger, Katja Hofmann, Bernhard Schölkopf
The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it.
no code implementations • 4 Mar 2016 • Philipp Geiger, Lucian Carata, Bernhard Schölkopf
Cloud computing involves complex technical and economic systems and interactions.
no code implementations • 14 Nov 2014 • Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf
A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.
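The regression step described here can be sketched in a few lines: simulate a VAR(1) time series $X_t = B X_{t-1} + \varepsilon_t$, then regress present on past by least squares to recover $\hat{B}$. This is a minimal illustration of the Granger-style analysis the abstract refers to, with an assumed toy transition matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground-truth transition matrix: component 0 drives component 1.
B = np.array([[0.5, 0.0],
              [0.4, 0.3]])

# Simulate the VAR(1) time series X_t = B X_{t-1} + noise.
T = 5000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = B @ X[t - 1] + rng.normal(scale=0.1, size=2)

# Regress present on past: solve past @ B.T ~= present by least squares.
past, present = X[:-1], X[1:]
coef, *_ = np.linalg.lstsq(past, present, rcond=None)
B_hat = coef.T  # estimated regression matrix

print(np.round(B_hat, 2))
```

With enough samples and no hidden confounding, $\hat{B}$ recovers $B$; the paper's question is precisely when such a causal reading of $\hat{B}$ is justified.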