Search Results for author: Michael Herman

Found 6 papers, 0 papers with code

Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics

no code implementations • 13 Apr 2016 • Michael Herman, Tobias Gindele, Jörg Wagner, Felix Schmitt, Wolfram Burgard

Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent.

Reinforcement Learning (RL) +1
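
The abstract above only states the IRL problem. As a purely illustrative aid (not the method of this paper, which additionally estimates the transition dynamics), the sketch below shows tabular maximum-entropy-style IRL with known dynamics and a linear reward; the toy MDP and all names are hypothetical.

    # Illustrative sketch of tabular maximum-entropy-style IRL, NOT the paper's method:
    # the paper estimates the dynamics jointly, whereas this toy assumes P is known.
    import numpy as np

    n_states, n_actions, horizon = 5, 2, 10
    rng = np.random.default_rng(0)

    # Known transition model P[s, a, s'] (hypothetical random MDP).
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

    # One-hot state features; the hidden true reward favours the last state.
    features = np.eye(n_states)
    true_reward = features @ np.array([0.0, 0.0, 0.0, 0.0, 1.0])

    def soft_value_iteration(reward):
        # Soft (maximum-entropy) Q-iteration; returns a stochastic policy pi[s, a].
        V = np.zeros(n_states)
        for _ in range(horizon):
            Q = reward[:, None] + P @ V            # expected soft value of (s, a)
            V = np.log(np.exp(Q).sum(axis=1))      # soft maximum over actions
        return np.exp(Q - V[:, None])              # pi(a|s) = exp(Q - V)

    def expected_state_visitation(pi):
        # Expected state visitation over the horizon, uniform start distribution.
        d = np.full(n_states, 1.0 / n_states)
        visits = d.copy()
        for _ in range(horizon - 1):
            d = np.einsum('s,sa,sat->t', d, pi, P) # propagate through policy and dynamics
            visits += d
        return visits

    # "Expert" feature expectations, generated from the true reward for this toy example.
    expert_visits = expected_state_visitation(soft_value_iteration(true_reward))
    expert_features = features.T @ expert_visits

    # Learn reward weights by matching the expert's feature expectations.
    theta = np.zeros(n_states)
    for _ in range(200):
        pi = soft_value_iteration(features @ theta)
        learner_features = features.T @ expected_state_visitation(pi)
        theta += 0.1 * (expert_features - learner_features)

    print("recovered reward weights:", np.round(theta, 2))

The gradient step matches the expert's feature expectations against those induced by the current soft-optimal policy, which is the reward-learning core that methods estimating rewards and dynamics simultaneously build on.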

Hierarchical Recurrent Filtering for Fully Convolutional DenseNets

no code implementations • 5 Oct 2018 • Jörg Wagner, Volker Fischer, Michael Herman, Sven Behnke

Generating a robust representation of the environment is a crucial ability of learning agents.

Functionally Modular and Interpretable Temporal Filtering for Robust Segmentation

no code implementations • 9 Oct 2018 • Jörg Wagner, Volker Fischer, Michael Herman, Sven Behnke

Our filter module splits the filter task into multiple less complex and more interpretable subtasks.
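
To make the idea of splitting a temporal filter into smaller, interpretable subtasks more concrete, here is a purely illustrative sketch (not the paper's architecture) that separates a predict step from an update/fusion step so each part can be inspected in isolation; all class names and the toy data are hypothetical.

    # Illustrative decomposition of a temporal filter into two interpretable subtasks,
    # loosely following a Bayesian predict/update structure; NOT the paper's modules.
    import numpy as np

    class MotionPrediction:
        # Subtask 1: propagate the previous filtered state to the current time step.
        def __call__(self, prev_state, motion):
            # Here: a simple shift of the feature vector by an integer motion estimate.
            return np.roll(prev_state, shift=motion, axis=-1)

    class MeasurementUpdate:
        # Subtask 2: fuse the propagated state with the current (noisy) observation.
        def __init__(self, gain=0.5):
            self.gain = gain  # fixed blending weight; a learned gate in a real system
        def __call__(self, predicted_state, observation):
            return (1 - self.gain) * predicted_state + self.gain * observation

    class ModularTemporalFilter:
        # Composes the two subtasks; each can be tested and interpreted in isolation.
        def __init__(self):
            self.predict, self.update = MotionPrediction(), MeasurementUpdate()
            self.state = None
        def step(self, observation, motion=0):
            if self.state is None:
                self.state = observation
            else:
                self.state = self.update(self.predict(self.state, motion), observation)
            return self.state

    # Toy usage: filter a stream of noisy 1-D "feature maps".
    f = ModularTemporalFilter()
    for t in range(3):
        obs = np.ones(8) + 0.1 * np.random.randn(8)
        filtered = f.step(obs, motion=1)
    print(np.round(filtered, 2))

Because each subtask is a separate module with a narrow responsibility, intermediate results (the propagated state, the fused state) can be visualized and validated independently, which is the kind of interpretability the decomposition aims at.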

Human Motion Trajectory Prediction: A Survey

no code implementations • 15 May 2019 • Andrey Rudenko, Luigi Palmieri, Michael Herman, Kris M. Kitani, Dariu M. Gavrila, Kai O. Arras

With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important.

Trajectory Prediction

Pedestrian Behavior Prediction for Automated Driving: Requirements, Metrics, and Relevant Features

no code implementations • 15 Dec 2020 • Michael Herman, Jörg Wagner, Vishnu Prabhakaran, Nicolas Möser, Hanna Ziesche, Waleed Ahmed, Lutz Bürkle, Ernst Kloppenburg, Claudius Gläser

In this paper, we thoroughly analyze the requirements on pedestrian behavior prediction for automated driving via a system-level approach.
