Search Results for author: Rico Jonschkowski

Found 16 papers, 8 papers with code

Conditional Object-Centric Learning from Video

3 code implementations ICLR 2022 Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, Klaus Greff

Object-centric representations are a promising path toward more systematic generalization by providing flexible abstractions upon which compositional world models can be built.

3D geometry Instance Segmentation +4

SMURF: Self-Teaching Multi-Frame Unsupervised RAFT with Full-Image Warping

2 code implementations CVPR 2021 Austin Stone, Daniel Maurer, Alper Ayvaci, Anelia Angelova, Rico Jonschkowski

We present SMURF, a method for unsupervised learning of optical flow that improves the state of the art on all benchmarks by 36% to 40% (over the prior best method UFlow) and even outperforms several supervised approaches such as PWC-Net and FlowNet2.

Optical Flow Estimation

MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

no code implementations 16 Apr 2021 Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman

In this paper, we study how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously, sharing exploration, experience, and representations across tasks.

Reinforcement Learning +1

Adaptive Intermediate Representations for Video Understanding

no code implementations 14 Apr 2021 Juhana Kangaspunta, AJ Piergiovanni, Rico Jonschkowski, Michael Ryoo, Anelia Angelova

A common strategy for video understanding is to incorporate spatial and motion information by fusing features derived from RGB frames and optical flow.

Action Classification Optical Flow Estimation +3

The Distracting Control Suite -- A Challenging Benchmark for Reinforcement Learning from Pixels

4 code implementations 7 Jan 2021 Austin Stone, Oscar Ramirez, Kurt Konolige, Rico Jonschkowski

Our experiments show that current RL methods for vision-based control perform poorly under distractions, and that their performance decreases with increasing distraction complexity, showing that new methods are needed to cope with the visual complexities of the real world.

Reinforcement Learning (RL)

Learning Object-Centric Video Models by Contrasting Sets

no code implementations 20 Nov 2020 Sindy Löwe, Klaus Greff, Rico Jonschkowski, Alexey Dosovitskiy, Thomas Kipf

We address this problem by introducing a global, set-based contrastive loss: instead of contrasting individual slot representations against one another, we aggregate the representations and contrast the joined sets against one another.

Future prediction Object +1
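The set-based contrastive loss described above can be sketched as follows. This is a hypothetical, minimal illustration, not the paper's implementation: it assumes sum pooling as the aggregation and a dot-product similarity inside an InfoNCE-style objective; `pool`, `set_contrastive_loss`, and the toy slot vectors are all made up for the example.

```python
import math

def pool(slots):
    """Aggregate a set of slot vectors into one set representation
    (sum pooling; the paper's aggregation choice may differ)."""
    dim = len(slots[0])
    return [sum(s[d] for s in slots) for d in range(dim)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def set_contrastive_loss(pred_slots, target_slots, negative_sets):
    """InfoNCE-style loss over pooled sets: the predicted set should score
    high against its target set and low against sets from other samples."""
    q = pool(pred_slots)
    pos = math.exp(dot(q, pool(target_slots)))
    negs = sum(math.exp(dot(q, pool(n))) for n in negative_sets)
    return -math.log(pos / (pos + negs))

# Toy data: two 2-D slots per set.
pred = [[0.5, 0.1], [0.2, 0.4]]
target = [[0.5, 0.1], [0.2, 0.4]]   # matches pred
neg = [[-0.5, -0.1], [-0.2, -0.4]]  # a mismatched set
loss = set_contrastive_loss(pred, target, [neg])
```

Because only the pooled sets are contrasted, the loss is invariant to the order of slots within each set, which avoids having to match individual slots across frames.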

A Geometric Perspective on Self-Supervised Policy Adaptation

no code implementations 14 Nov 2020 Cristian Bodnar, Karol Hausman, Gabriel Dulac-Arnold, Rico Jonschkowski

One of the most challenging aspects of real-world reinforcement learning (RL) is the multitude of unpredictable and ever-changing distractions that could divert an agent from what it was tasked to do in its training environment.

Reinforcement Learning (RL)

What Matters in Unsupervised Optical Flow

5 code implementations ECCV 2020 Rico Jonschkowski, Austin Stone, Jonathan T. Barron, Ariel Gordon, Kurt Konolige, Anelia Angelova

We systematically compare and analyze a set of key components in unsupervised optical flow to identify which photometric loss, occlusion handling, and smoothness regularization is most effective.

Occlusion Handling Optical Flow Estimation
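A minimal sketch of two of the components compared above, a robust (Charbonnier) photometric loss combined with occlusion masking. This is an illustrative simplification, not the paper's code: it assumes flattened grayscale frames and a precomputed visibility mask, and the function names are invented for the example.

```python
def charbonnier(x, eps=1e-3):
    # Robust penalty: behaves like |x| for large residuals, smooth near zero.
    return (x * x + eps * eps) ** 0.5

def photometric_loss(frame1, frame2_warped, visible_mask):
    """Average robust photometric error over pixels visible in both frames
    (visible_mask[i] == 1 means not occluded)."""
    err = sum(m * charbonnier(a - b)
              for a, b, m in zip(frame1, frame2_warped, visible_mask))
    return err / (sum(visible_mask) + 1e-8)

# Toy example: the last pixel is occluded, so its large error is ignored.
f1 = [0.2, 0.5, 0.9]
f2w = [0.2, 0.4, 0.0]
masked = photometric_loss(f1, f2w, [1, 1, 0])
unmasked = photometric_loss(f1, f2w, [1, 1, 1])
```

Masking occluded pixels matters because their photometric error is large even when the predicted flow is correct, so without it the loss penalizes the right answer.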

Differentiable Mapping Networks: Learning Structured Map Representations for Sparse Visual Localization

no code implementations 19 May 2020 Peter Karkus, Anelia Angelova, Vincent Vanhoucke, Rico Jonschkowski

We address these tasks by combining spatial structure (differentiable mapping) and end-to-end learning in a novel neural network architecture: the Differentiable Mapping Network (DMN).

Visual Localization

Towards Differentiable Resampling

no code implementations 24 Apr 2020 Michael Zhu, Kevin Murphy, Rico Jonschkowski

Resampling is a key component of sample-based recursive state estimation in particle filters.
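Standard resampling selects discrete ancestor indices, which blocks gradients. One known workaround (soft resampling, used e.g. in particle filter networks, and not necessarily the approach of this paper) samples ancestors from a mixture of the particle weights and a uniform distribution, then importance-corrects the new weights so they remain a differentiable function of the old ones. The sketch below is a hedged illustration with invented names:

```python
import random

def soft_resample(particles, weights, alpha=0.5):
    """Soft resampling sketch: sample ancestors from the mixture
    q_i = alpha * w_i + (1 - alpha) / n, then set the new weight of a
    drawn particle to w_i / (n * q_i) so gradients w.r.t. the original
    weights survive. alpha trades sampling variance for gradient flow."""
    n = len(particles)
    q = [alpha * w + (1 - alpha) / n for w in weights]
    idx = random.choices(range(n), weights=q, k=n)
    new_particles = [particles[i] for i in idx]
    new_weights = [weights[i] / (n * q[i]) for i in idx]
    total = sum(new_weights)
    return new_particles, [w / total for w in new_weights]

random.seed(0)
parts, wts = soft_resample([0.0, 1.0, 2.0, 3.0], [0.7, 0.1, 0.1, 0.1])
```

With `alpha=1` this reduces to ordinary multinomial resampling (all corrected weights equal); with `alpha<1` low-weight particles keep a chance of surviving, which is what keeps the weight ratio informative.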

KeyPose: Multi-View 3D Labeling and Keypoint Estimation for Transparent Objects

1 code implementation CVPR 2020 Xingyu Liu, Rico Jonschkowski, Anelia Angelova, Kurt Konolige

We address two problems: first, we establish an easy method for capturing and labeling 3D keypoints on desktop objects with an RGB camera; and second, we develop a deep neural network, called KeyPose, that learns to accurately predict object poses using 3D keypoints, from stereo input, and works even for transparent objects.

3D Pose Estimation Keypoint Estimation +1

Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras

4 code implementations ICCV 2019 Ariel Gordon, Hanhan Li, Rico Jonschkowski, Anelia Angelova

We present a novel method for simultaneous learning of depth, egomotion, object motion, and camera intrinsics from monocular videos, using only consistency across neighboring video frames as supervision signal.

Depth Prediction Monocular Depth Estimation +1
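The cross-frame consistency supervision in such methods rests on pinhole-camera geometry: unproject a pixel using its predicted depth and the (here, learned) intrinsics, move it by the predicted egomotion, and reproject it into the neighboring frame. A minimal sketch under simplifying assumptions (translation-only egomotion, no rotation or distortion; all names and numbers invented):

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    # Pixel coordinates + depth -> 3D point in the camera frame (pinhole model).
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def project(x, y, z, fx, fy, cx, cy):
    # 3D point in the camera frame -> pixel coordinates.
    return (fx * x / z + cx, fy * y / z + cy)

def warp_pixel(u, v, depth, intrinsics, t):
    """Warp a pixel from frame A into frame B under translation t = (tx, ty, tz).
    A full implementation would also apply the predicted rotation."""
    x, y, z = unproject(u, v, depth, *intrinsics)
    x, y, z = x + t[0], y + t[1], z + t[2]
    return project(x, y, z, *intrinsics)

K = (500.0, 500.0, 320.0, 240.0)  # fx, fy, cx, cy (made-up values)
u2, v2 = warp_pixel(100.0, 80.0, 2.0, K, (0.1, 0.0, 0.0))
```

Because every operation above is differentiable in depth, motion, and the intrinsics (fx, fy, cx, cy), a photometric consistency loss on the warped pixels can train all of them jointly, which is what makes "unknown cameras" workable.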

Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors

3 code implementations 28 May 2018 Rico Jonschkowski, Divyam Rastogi, Oliver Brock

We present differentiable particle filters (DPFs): a differentiable implementation of the particle filter algorithm with learnable motion and measurement models.
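One predict/update step of the underlying particle filter algorithm can be sketched as below. In a DPF the motion and measurement models would be learned networks trained end to end; here they are plain callables standing in for them, and the toy 1-D localization setup is invented for the example.

```python
import math

def particle_filter_step(particles, weights, control, observation,
                         motion_model, measurement_model):
    """One particle filter step: propagate particles through the motion
    model, then reweight each by its observation likelihood."""
    particles = [motion_model(p, control) for p in particles]
    weights = [w * measurement_model(p, observation)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    return particles, [w / total for w in weights]

# Toy 1-D localization: deterministic motion, Gaussian-shaped likelihood.
motion = lambda p, u: p + u
likelihood = lambda p, z: math.exp(-(p - z) ** 2 / 0.1)
ps, ws = particle_filter_step([0.0, 0.5, 1.0, 1.5], [0.25] * 4,
                              0.0, 1.0, motion, likelihood)
mean = sum(p * w for p, w in zip(ps, ws))
```

Both the prediction and the reweighting are differentiable, which is what lets gradients from a downstream state-estimation loss reach the two models; resampling is the step that needs special treatment.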

PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations

no code implementations 27 May 2017 Rico Jonschkowski, Roland Hafner, Jonathan Scholz, Martin Riedmiller

We propose position-velocity encoders (PVEs), which learn, without supervision, to encode images to positions and velocities of task-relevant objects.

Image Reconstruction Position
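The position-velocity idea can be illustrated with a finite-difference velocity and one example of a robotic prior. This is a hypothetical simplification of the PVE training signal, not the paper's loss; `velocities_from_positions` and `slowness_loss` are invented names.

```python
def velocities_from_positions(positions, dt=1.0):
    """Finite-difference velocities from consecutive encoded positions:
    v_t = (p_t - p_{t-1}) / dt, the core of a position-velocity state."""
    return [(b - a) / dt for a, b in zip(positions, positions[1:])]

def slowness_loss(positions):
    # One robotic prior: task-relevant positions should change slowly,
    # so penalize the mean squared velocity along a trajectory.
    vels = velocities_from_positions(positions)
    return sum(v * v for v in vels) / len(vels)

smooth = slowness_loss([0.0, 0.1, 0.2, 0.3])
jumpy = slowness_loss([0.0, 1.0, 0.0, 1.0])
```

Priors like this supervise the encoder without labels: an encoding whose positions jump erratically between consecutive frames is penalized even though no ground-truth object positions are ever observed.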

Patterns for Learning with Side Information

1 code implementation 19 Nov 2015 Rico Jonschkowski, Sebastian Höfer, Oliver Brock

Supervised, semi-supervised, and unsupervised learning estimate a function given input/output samples.

Multi-Task Learning Multi-View Learning
