Search Results for author: Wilko Schwarting

Found 15 papers, 5 papers with code

Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution

no code implementations • 5 Apr 2024 • Tim Seyde, Peter Werner, Wilko Schwarting, Markus Wulfmeier, Daniela Rus

Recent reinforcement learning approaches have shown surprisingly strong capabilities of bang-bang policies for solving continuous control benchmarks.

Continuous Control • Q-Learning
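
The title points to Q-learning over a discretized action space whose control resolution is refined adaptively during training. As a rough illustration of that idea only (the bin schedule, masking, and network shape below are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class GrowingDiscreteQ(nn.Module):
    """Illustrative sketch: per-dimension discrete Q-values whose control
    resolution starts at bang-bang (2 bins) and can be grown over training."""

    def __init__(self, obs_dim, act_dim, max_bins=9, hidden=256):
        super().__init__()
        self.act_dim, self.max_bins = act_dim, max_bins
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim * max_bins),
        )
        self.active_bins = 2  # start with bang-bang control

    def grow(self):
        """Refine the control resolution, e.g. 2 -> 3 -> 5 -> 9 bins."""
        self.active_bins = min(2 * self.active_bins - 1, self.max_bins)

    def forward(self, obs):
        q = self.net(obs).view(-1, self.act_dim, self.max_bins)
        # Mask out bins that are not yet active at the current resolution.
        mask = torch.full_like(q, float("-inf"))
        mask[..., : self.active_bins] = 0.0
        return q + mask

    def act(self, obs):
        idx = self.forward(obs).argmax(dim=-1)  # greedy bin per action dimension
        # Map bin indices to continuous actions in [-1, 1].
        bins = torch.linspace(-1.0, 1.0, self.active_bins, device=idx.device)
        return bins[idx]
```

A training loop would call `grow()` once the coarser policy stops improving; when and how to refine is the part this sketch does not attempt to capture.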

OptFlow: Fast Optimization-based Scene Flow Estimation without Supervision

no code implementations • 4 Jan 2024 • Rahul Ahuja, Chris Baker, Wilko Schwarting

Without relying on learning or any labeled datasets, OptFlow achieves state-of-the-art performance for scene flow estimation on popular autonomous driving benchmarks.

Autonomous Driving • Point Cloud Registration • +2
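
Optimization-based, unsupervised scene flow generally means fitting a per-point flow field directly to a pair of point clouds at test time. The sketch below shows that general recipe with a nearest-neighbour data term and a crude smoothness term; it is not a reproduction of OptFlow's objective.

```python
import torch

def estimate_scene_flow(src, tgt, iters=200, lr=0.05, smooth_w=1.0):
    """Illustrative label-free scene flow: directly optimize a per-point flow
    so that src + flow lands near tgt (nearest-neighbour data term) while
    nearby points keep similar flows (crude local-rigidity term)."""
    flow = torch.zeros_like(src, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    # Neighbourhood structure of the source cloud (excluding the point itself).
    nn_idx = torch.cdist(src, src).topk(k=4, largest=False).indices[:, 1:]
    for _ in range(iters):
        warped = src + flow
        data_term = torch.cdist(warped, tgt).min(dim=1).values.mean()
        smooth_term = (flow.unsqueeze(1) - flow[nn_idx]).norm(dim=-1).mean()
        loss = data_term + smooth_w * smooth_term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return flow.detach()

# Toy check: the target cloud is the source shifted by 0.5 along x.
src = torch.randn(512, 3)
tgt = src + torch.tensor([0.5, 0.0, 0.0])
print(estimate_scene_flow(src, tgt).mean(dim=0))  # should move towards (0.5, 0, 0)
```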

Solving Continuous Control via Q-learning

1 code implementation • 22 Oct 2022 • Tim Seyde, Peter Werner, Wilko Schwarting, Igor Gilitschenski, Martin Riedmiller, Daniela Rus, Markus Wulfmeier

While there has been substantial success for solving continuous control with actor-critic methods, simpler critic-only methods such as Q-learning find limited application in the associated high-dimensional action spaces.

Continuous Control • Multi-agent Reinforcement Learning • +1
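
A common way to make critic-only Q-learning tractable in high-dimensional action spaces is to discretize each action dimension coarsely (down to bang-bang) and treat the dimensions as decoupled, so the greedy action factorizes per dimension instead of requiring a search over an exponential joint space. The sketch below illustrates that decoupling; the network, bin count, and value aggregation are simplifying assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class DecoupledQ(nn.Module):
    """Illustrative critic-only setup for continuous control: each action
    dimension gets its own small set of discrete (e.g. bang-bang) choices,
    so the greedy action is a cheap per-dimension argmax."""

    def __init__(self, obs_dim, act_dim, bins=2, hidden=256):
        super().__init__()
        self.act_dim, self.bins = act_dim, bins
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim * bins),
        )
        # Discrete action values per dimension, e.g. {-1, +1} for bang-bang.
        self.register_buffer("values", torch.linspace(-1.0, 1.0, bins))

    def q_per_dim(self, obs):
        return self.trunk(obs).view(-1, self.act_dim, self.bins)

    def greedy_action(self, obs):
        idx = self.q_per_dim(obs).argmax(dim=-1)      # (batch, act_dim)
        return self.values[idx]                       # continuous actions

    def state_value(self, obs):
        # Aggregate per-dimension maxima (mean here) for bootstrapped targets;
        # the aggregation choice is an assumption of this sketch.
        return self.q_per_dim(obs).max(dim=-1).values.mean(dim=-1)
```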

Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models

no code implementations • 5 Apr 2022 • Jose L. Vazquez, Alexander Liniger, Wilko Schwarting, Daniela Rus, Luc van Gool

Fundamental to the success of our method is the design of a novel multi-agent policy network that can steer a vehicle given the state of the surrounding agents and the map information.

motion prediction
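
The abstract describes a policy network that maps the ego state, the states of surrounding agents, and map information to a control command. Purely as a hypothetical illustration of that interface (encoders, pooling, feature sizes, and the two-dimensional control output are all assumptions):

```python
import torch
import torch.nn as nn

class InteractivePolicy(nn.Module):
    """Hypothetical sketch of a policy that maps the ego state, the states of
    surrounding agents, and map features to a bounded control command."""

    def __init__(self, ego_dim=5, agent_dim=5, map_dim=32, hidden=128):
        super().__init__()
        self.agent_enc = nn.Sequential(nn.Linear(agent_dim, hidden), nn.ReLU())
        self.map_enc = nn.Sequential(nn.Linear(map_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(ego_dim + 2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),      # e.g. acceleration and steering
        )

    def forward(self, ego, agents, map_feat):
        # agents: (batch, num_agents, agent_dim); permutation-invariant pooling.
        social = self.agent_enc(agents).max(dim=1).values
        ctx = torch.cat([ego, social, self.map_enc(map_feat)], dim=-1)
        return torch.tanh(self.head(ctx))  # controls bounded in [-1, 1]
```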

Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space

1 code implementation • 19 Feb 2021 • Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Lucas Liebenwein, Ryan Sander, Sertac Karaman, Daniela Rus

We demonstrate the effectiveness of our algorithm in learning competitive behaviors on a novel multi-agent racing benchmark that requires planning from image observations.

Reinforcement Learning (RL)

Deep Orientation Uncertainty Learning based on a Bingham Loss

1 code implementation • ICLR 2020 • Igor Gilitschenski, Roshni Sahoo, Wilko Schwarting, Alexander Amini, Sertac Karaman, Daniela Rus

Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation.

Motion Estimation • Pose Estimation

Deep Evidential Regression

4 code implementations • NeurIPS 2020 • Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus

We demonstrate learning well-calibrated measures of uncertainty on various benchmarks, scaling to complex computer vision tasks, as well as robustness to adversarial and OOD test samples.

regression
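
Deep Evidential Regression trains a deterministic network to output the four parameters (γ, ν, α, β) of a Normal-Inverse-Gamma distribution over the Gaussian likelihood parameters. Below is a minimal sketch of such a head and the corresponding NIG negative log-likelihood, following the paper's published formulation but not its released code (head architecture and activations are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Outputs the four Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta),
    from which a prediction and its uncertainty follow in closed form."""

    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 4)

    def forward(self, x):
        gamma, lognu, logalpha, logbeta = self.fc(x).chunk(4, dim=-1)
        nu = F.softplus(lognu)
        alpha = F.softplus(logalpha) + 1.0   # alpha > 1 keeps the variance finite
        beta = F.softplus(logbeta)
        return gamma, nu, alpha, beta

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of targets y under the NIG evidential distribution."""
    omega = 2.0 * beta * (1.0 + nu)
    return (
        0.5 * torch.log(torch.pi / nu)
        - alpha * torch.log(omega)
        + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
        + torch.lgamma(alpha)
        - torch.lgamma(alpha + 0.5)
    ).mean()
```

The full training objective also adds an evidence regularizer that penalizes confident errors, roughly |y − γ| · (2ν + α), which this sketch omits.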

Deep Evidential Uncertainty

no code implementations • 25 Sep 2019 • Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus

In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target.

regression
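
This is the earlier version of the evidential regression idea above. Given the NIG parameters from such a head, the prediction and the two uncertainty estimates are read out in closed form; a small sketch (variable names assumed, requires α > 1):

```python
def evidential_readout(gamma, nu, alpha, beta):
    """Prediction plus aleatoric/epistemic uncertainty from NIG parameters
    (gamma, nu, alpha, beta) produced by an evidential head."""
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)          # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))   # model uncertainty, shrinks as evidence nu grows
    return prediction, aleatoric, epistemic
```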

Training Support Vector Machines using Coresets

no code implementations • 13 Aug 2017 • Cenk Baykal, Lucas Liebenwein, Wilko Schwarting

We present a novel coreset construction algorithm for solving classification tasks using Support Vector Machines (SVMs) in a computationally efficient manner.
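
Coreset constructions for SVMs typically assign each point an importance (sensitivity) score, sample a small subset with probability proportional to it, and reweight the sample so the subset objective stays an unbiased estimate of the full one. The sketch below follows that template with a crude margin-based surrogate for sensitivity; it illustrates the sampling pattern only, not the paper's sensitivity bounds or guarantees.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_coreset(X, y, m, seed=0):
    """Sample a weighted coreset of size m: points a cheap pilot model finds
    hard (near or beyond the margin) get higher sampling probability, and each
    sampled point carries an inverse-probability weight to stay unbiased."""
    rng = np.random.default_rng(seed)
    step = max(1, len(X) // 200)                       # small pilot subsample
    pilot = LinearSVC().fit(X[::step], y[::step])
    signs = np.where(y == pilot.classes_[1], 1.0, -1.0)
    margins = pilot.decision_function(X) * signs       # functional margin
    sensitivity = np.maximum(1.0 - margins, 1e-3)      # hinge-style surrogate
    p = sensitivity / sensitivity.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], y[idx], weights

# Usage: fit the SVM on the small weighted subset instead of all points.
X = np.random.randn(20000, 10)
y = (X[:, 0] + 0.3 * np.random.randn(20000) > 0).astype(int)
Xc, yc, w = svm_coreset(X, y, m=500)
clf = LinearSVC().fit(Xc, yc, sample_weight=w)
```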
