Search Results for author: Max Argus

Found 11 papers, 3 papers with code

Conditional Visual Servoing for Multi-Step Tasks

no code implementations • 17 May 2022 • Sergio Izquierdo, Max Argus, Thomas Brox

Visual Servoing has been effectively used to move a robot into specific target locations or to track a recorded demonstration.

Contrastive Representation Learning for Hand Shape Estimation

no code implementations • 8 Jun 2021 • Christian Zimmermann, Max Argus, Thomas Brox

This work presents improvements in monocular hand shape estimation by building on top of recent advances in unsupervised learning.

Contrastive Learning • Representation Learning
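
The entry above mentions contrastive pre-training for hand shape estimation. The paper's exact objective is not reproduced here; the snippet below is only a minimal, self-contained sketch of a generic SimCLR-style contrastive (NT-Xent) loss of the kind such self-supervised pre-training commonly uses. The function name and temperature value are illustrative.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1, z2: (N, D) embeddings of two augmented views of the same N images.
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit norm
        sim = z @ z.t() / temperature                            # pairwise cosine similarities
        n = z1.shape[0]
        # Exclude self-similarity from the softmax denominator.
        sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
        # Row i's positive is the other view of the same image: i+N (or i-N).
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)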

Pre-training of Deep RL Agents for Improved Learning under Domain Randomization

no code implementations • 29 Apr 2021 • Artemij Amiranashvili, Max Argus, Lukas Hermann, Wolfram Burgard, Thomas Brox

Visual domain randomization in simulated environments is a widely used method to transfer policies trained in simulation to real robots.

DeepMind
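
The snippet above refers to visual domain randomization. As background only, not taken from this paper: a minimal sketch of what per-episode visual randomization typically looks like. The parameter names, ranges, and the env.apply_visuals hook are hypothetical.

    import random

    def sample_visual_params():
        # Draw one set of visual parameters; a fresh draw is applied at every episode
        # reset so the policy cannot overfit to any single rendering of the scene.
        return {
            "light_intensity": random.uniform(0.5, 1.5),
            "camera_jitter_deg": random.uniform(-5.0, 5.0),
            "table_rgb": [random.random() for _ in range(3)],
            "texture_id": random.randrange(100),
        }

    # Hypothetical training-loop usage:
    # env.apply_visuals(sample_visual_params())   # assumed simulator hook
    # obs = env.reset()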

FlowControl: Optical Flow Based Visual Servoing

no code implementations • 1 Jul 2020 • Max Argus, Lukas Hermann, Jon Long, Thomas Brox

One-shot imitation is the vision of robot programming from a single demonstration, rather than by tedious construction of computer code.

Optical Flow Estimation
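
FlowControl, per the abstract, drives servoing with optical flow. The sketch below is not the paper's implementation (which relies on a learned flow network); it only illustrates the basic idea with OpenCV's classical Farneback flow: estimate the displacement from the live frame toward the demonstration frame and turn its mean into a small camera-frame correction. The gain and the mask-free averaging are illustrative simplifications.

    import cv2
    import numpy as np

    def servo_step(live_gray, demo_gray, gain=0.001):
        # Dense flow from the live image toward the demonstration image (uint8 grayscale).
        flow = cv2.calcOpticalFlowFarneback(live_gray, demo_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx, dy = flow.reshape(-1, 2).mean(axis=0)    # mean pixel displacement
        return np.array([dx, dy]) * gain             # scaled x/y correction in the image plane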

Temporal Shift GAN for Large Scale Video Generation

1 code implementation • 4 Apr 2020 • Andres Munoz, Mohammadreza Zolfaghari, Max Argus, Thomas Brox

In this paper, we present a network architecture for video generation that models spatio-temporal consistency without resorting to costly 3D architectures.

Video Generation
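
The title above points to a temporal-shift mechanism for video generation. The paper's architecture is not reproduced here; the snippet only illustrates the generic temporal shift operation (moving a fraction of channels one step forward or backward in time so that 2D convolutions can exchange information across frames). The shift fraction and tensor layout are assumptions.

    import torch

    def temporal_shift(x, shift_frac=0.125):
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        n = int(c * shift_frac)
        out = torch.zeros_like(x)
        out[:, 1:, :n] = x[:, :-1, :n]                # first chunk: shifted forward in time
        out[:, :-1, n:2 * n] = x[:, 1:, n:2 * n]      # second chunk: shifted backward in time
        out[:, :, 2 * n:] = x[:, :, 2 * n:]           # remaining channels: unchanged
        return out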

Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control

1 code implementation • 17 Oct 2019 • Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, Thomas Brox

We propose Adaptive Curriculum Generation from Demonstrations (ACGD) for reinforcement learning in the presence of sparse rewards.

Reinforcement Learning
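
ACGD's actual mechanism for adapting the curriculum from demonstrations is not described in the snippet above, and the code below does not reproduce it. It is only a generic, hypothetical sketch of the broader idea of a success-rate-driven curriculum: start episodes close to the goal along a demonstration and move the start earlier as the agent's recent success rate rises. All thresholds and step sizes are invented for illustration.

    class SuccessRateCurriculum:
        def __init__(self, demo_length, window=50):
            self.demo_length = demo_length
            self.offset = 1          # how many steps before the demo's end to start from
            self.window = window
            self.recent = []

        def start_index(self):
            # Reset the environment to this point of the demonstration.
            return max(self.demo_length - 1 - self.offset, 0)

        def update(self, success):
            self.recent = (self.recent + [float(success)])[-self.window:]
            rate = sum(self.recent) / len(self.recent)
            if rate > 0.8 and self.offset < self.demo_length - 1:
                self.offset += 1     # doing well: start further from the goal
            elif rate < 0.2 and self.offset > 1:
                self.offset -= 1     # struggling: start closer to the goal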

CrossNorm: On Normalization for Off-Policy Reinforcement Learning

no code implementations • 25 Sep 2019 • Aditya Bhatt, Max Argus, Artemij Amiranashvili, Thomas Brox

Off-policy temporal difference (TD) methods are a powerful class of reinforcement learning (RL) algorithms.

Reinforcement Learning
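
For readers unfamiliar with the term in the abstract above: off-policy TD learning bootstraps value estimates from transitions collected by a different behaviour policy. This is textbook background, not the paper's contribution; a minimal tabular Q-learning update (the classic off-policy TD method) is sketched below with illustrative hyperparameters.

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
        # Q: (num_states, num_actions) table.
        # Off-policy TD target: reward plus the discounted value of the greedy next action.
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])
        return Q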

CrossNorm: Normalization for Off-Policy TD Reinforcement Learning

1 code implementation • 14 Feb 2019 • Aditya Bhatt, Max Argus, Artemij Amiranashvili, Thomas Brox

Off-policy temporal difference (TD) methods are a powerful class of reinforcement learning (RL) algorithms.

Reinforcement Learning
