3 code implementations • ICLR 2022 • Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, Klaus Greff
Object-centric representations are a promising path toward more systematic generalization by providing flexible abstractions upon which compositional world models can be built.
2 code implementations • CVPR 2021 • Austin Stone, Daniel Maurer, Alper Ayvaci, Anelia Angelova, Rico Jonschkowski
We present SMURF, a method for unsupervised learning of optical flow that improves the state of the art on all benchmarks by 36% to 40% (over the prior best method UFlow) and even outperforms several supervised approaches such as PWC-Net and FlowNet2.
no code implementations • 16 Apr 2021 • Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman
In this paper, we study how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously, sharing exploration, experience, and representations across tasks.
no code implementations • 14 Apr 2021 • Juhana Kangaspunta, AJ Piergiovanni, Rico Jonschkowski, Michael Ryoo, Anelia Angelova
A common strategy for video understanding is to incorporate spatial and motion information by fusing features derived from RGB frames and optical flow (a minimal sketch of this fusion idea follows this entry).
Ranked #5 on Action Classification on Toyota Smarthome dataset
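To make the fusion idea above concrete, here is a minimal sketch of two-stream late fusion: per-frame features from an appearance stream and a motion stream are concatenated into one representation. The backbone functions, shapes, and fusion choice are illustrative assumptions, not the architecture proposed in the paper.

```python
import numpy as np

# Hypothetical stand-ins for the RGB and optical-flow backbones.
def rgb_backbone(frames):                       # frames: (T, H, W, 3)
    return frames.mean(axis=(1, 2))             # (T, 3) toy "appearance" features

def flow_backbone(flow):                        # flow: (T-1, H, W, 2)
    return np.abs(flow).mean(axis=(1, 2))       # (T-1, 2) toy "motion" features

def late_fusion(frames, flow):
    rgb_feat = rgb_backbone(frames)[:-1]        # drop last frame to align with flow
    motion_feat = flow_backbone(flow)
    # Concatenate appearance and motion features per time step.
    return np.concatenate([rgb_feat, motion_feat], axis=-1)

frames = np.random.rand(8, 32, 32, 3)
flow = np.random.rand(7, 32, 32, 2)
print(late_fusion(frames, flow).shape)          # (7, 5)
```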
4 code implementations • 7 Jan 2021 • Austin Stone, Oscar Ramirez, Kurt Konolige, Rico Jonschkowski
Our experiments show that current RL methods for vision-based control perform poorly under distractions, and that their performance decreases with increasing distraction complexity, indicating that new methods are needed to cope with the visual complexities of the real world.
no code implementations • 20 Nov 2020 • Sindy Löwe, Klaus Greff, Rico Jonschkowski, Alexey Dosovitskiy, Thomas Kipf
We address this problem by introducing a global, set-based contrastive loss: instead of contrasting individual slot representations against one another, we aggregate the representations and contrast the joined sets against one another.
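As background for the set-based contrastive loss described above, here is a minimal sketch: each example's slot representations are aggregated into a single vector, and the aggregated sets are contrasted across the batch with an InfoNCE-style objective. The sum aggregation and the exact loss form are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def set_contrastive_loss(slots_a, slots_b, temperature=0.1):
    """Toy set-based contrastive loss.
    slots_a, slots_b: (batch, num_slots, dim) slot representations of two views.
    Aggregate each example's slots, then contrast aggregated sets across the batch."""
    za = slots_a.sum(axis=1)                    # (batch, dim) aggregated set
    zb = slots_b.sum(axis=1)
    za = za / np.linalg.norm(za, axis=-1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=-1, keepdims=True)
    logits = za @ zb.T / temperature            # similarities between all set pairs
    # Matching sets (the diagonal) are positives, all other pairs are negatives.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

slots_a = np.random.randn(4, 6, 32)
slots_b = slots_a + 0.01 * np.random.randn(4, 6, 32)
print(set_contrastive_loss(slots_a, slots_b))
```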
no code implementations • 14 Nov 2020 • Cristian Bodnar, Karol Hausman, Gabriel Dulac-Arnold, Rico Jonschkowski
One of the most challenging aspects of real-world reinforcement learning (RL) is the multitude of unpredictable and ever-changing distractions that could divert an agent from what it was tasked to do in its training environment.
5 code implementations • ECCV 2020 • Rico Jonschkowski, Austin Stone, Jonathan T. Barron, Ariel Gordon, Kurt Konolige, Anelia Angelova
We systematically compare and analyze a set of key components in unsupervised optical flow to identify which photometric loss, occlusion handling, and smoothness regularization are most effective (a minimal sketch of these components follows this entry).
Ranked #5 on Optical Flow Estimation on Sintel Clean (unsupervised)
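The sketch below illustrates the three components named above under strong simplifications: nearest-neighbor warping, an in-bounds validity mask standing in for proper occlusion handling, and a first-order smoothness term. It is a toy rendition of the general recipe, not the paper's formulation.

```python
import numpy as np

def warp_nearest(img, flow):
    """Warp img (H, W, C) toward the reference frame with nearest-neighbor sampling.
    Returns the warped image and an in-bounds validity mask (a crude stand-in
    for learned occlusion handling)."""
    H, W, _ = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.round(xs + flow[..., 0]).astype(int)
    yt = np.round(ys + flow[..., 1]).astype(int)
    valid = (xt >= 0) & (xt < W) & (yt >= 0) & (yt < H)
    warped = np.zeros_like(img)
    warped[valid] = img[yt[valid], xt[valid]]
    return warped, valid

def unsupervised_flow_loss(img1, img2, flow, smooth_weight=0.1):
    warped2, valid = warp_nearest(img2, flow)
    photometric = np.abs(img1 - warped2)[valid].mean()       # photometric loss
    smooth = (np.abs(np.diff(flow, axis=0)).mean()
              + np.abs(np.diff(flow, axis=1)).mean())        # first-order smoothness
    return photometric + smooth_weight * smooth

img1 = np.random.rand(16, 16, 3)
img2 = np.roll(img1, shift=2, axis=1)               # img2 is img1 shifted right by 2 px
flow = np.zeros((16, 16, 2)); flow[..., 0] = 2.0    # the flow that explains the shift
print(unsupervised_flow_loss(img1, img2, flow))     # photometric term is ~0 here
```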
no code implementations • 19 May 2020 • Peter Karkus, Anelia Angelova, Vincent Vanhoucke, Rico Jonschkowski
We address these tasks by combining spatial structure (differentiable mapping) and end-to-end learning in a novel neural network architecture: the Differentiable Mapping Network (DMN).
no code implementations • 24 Apr 2020 • Michael Zhu, Kevin Murphy, Rico Jonschkowski
Resampling is a key component of sample-based recursive state estimation in particle filters.
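For readers unfamiliar with the resampling step mentioned above, here is the standard systematic resampling scheme: one uniform offset is drawn and evenly spaced positions are traced through the cumulative weights. This is generic background, not the differentiable variant studied in the paper.

```python
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng(0)):
    """Systematic resampling: evenly spaced positions through the cumulative
    weights select which particles survive; weights are reset to uniform."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    indices = np.searchsorted(cumulative, positions)
    return particles[indices], np.full(n, 1.0 / n)

particles = np.random.randn(100, 2)                 # 100 particles, 2-D state
weights = np.random.rand(100)
new_particles, new_weights = systematic_resample(particles, weights)
print(new_particles.shape, new_weights[:3])
```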
1 code implementation • CVPR 2020 • Xingyu Liu, Rico Jonschkowski, Anelia Angelova, Kurt Konolige
We address two problems: first, we establish an easy method for capturing and labeling 3D keypoints on desktop objects with an RGB camera; and second, we develop a deep neural network, called KeyPose, that learns to accurately predict object poses from stereo input using 3D keypoints, and works even for transparent objects.
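Since the entry above builds on 3D keypoints from stereo, here is a small worked example of the underlying stereo geometry: recovering a 3D keypoint from its matched pixel locations in a rectified stereo pair via Z = fx * baseline / disparity. This is textbook triangulation shown for context, not the KeyPose network itself; the camera parameters are made up for illustration.

```python
import numpy as np

def triangulate_keypoint(uv_left, uv_right, fx, fy, cx, cy, baseline):
    """Recover a 3-D keypoint from a rectified stereo pair using standard
    stereo geometry (depth from disparity)."""
    disparity = uv_left[0] - uv_right[0]
    z = fx * baseline / disparity               # depth along the optical axis
    x = (uv_left[0] - cx) * z / fx
    y = (uv_left[1] - cy) * z / fy
    return np.array([x, y, z])

# Illustrative camera parameters; a keypoint observed at a 20-pixel disparity.
print(triangulate_keypoint(uv_left=(340.0, 260.0), uv_right=(320.0, 260.0),
                           fx=600.0, fy=600.0, cx=320.0, cy=240.0, baseline=0.06))
```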
no code implementations • 17 Sep 2019 • Rico Jonschkowski, Austin Stone
We present a novel approach to weakly supervised object detection.
4 code implementations • ICCV 2019 • Ariel Gordon, Hanhan Li, Rico Jonschkowski, Anelia Angelova
We present a novel method for simultaneous learning of depth, egomotion, object motion, and camera intrinsics from monocular videos, using only consistency across neighboring video frames as the supervision signal (a simplified sketch of this consistency idea follows this entry).
Ranked #11 on Unsupervised Monocular Depth Estimation on Cityscapes
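The sketch below illustrates the cross-frame consistency signal mentioned above: lift pixels of one frame to 3D with predicted depth, move them by the predicted egomotion, reproject into the neighboring frame, and penalize photometric differences. It uses nearest-neighbor sampling and omits object motion and intrinsics learning, so it is a simplified, assumption-laden rendition of the general idea rather than the paper's method.

```python
import numpy as np

def view_consistency_loss(img_a, img_b, depth_a, rotation, translation, K):
    """Photometric consistency between frames A and B given predicted depth
    for A and predicted egomotion (rotation, translation) from A to B."""
    H, W = depth_a.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pixels = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # 3 x N
    points = np.linalg.inv(K) @ pixels * depth_a.reshape(-1)                  # 3-D in frame A
    points_b = rotation @ points + translation[:, None]                       # move into frame B
    proj = K @ points_b
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (proj[2] > 0)
    resampled = img_b[v[valid], u[valid]]
    return np.abs(img_a.reshape(-1, 3)[valid] - resampled).mean()

H, W = 16, 16
K = np.array([[20.0, 0, W / 2], [0, 20.0, H / 2], [0, 0, 1.0]])   # toy intrinsics
img = np.random.rand(H, W, 3)
# With identity egomotion and the same image, the loss is ~0 for any depth.
print(view_consistency_loss(img, img, np.full((H, W), 2.0), np.eye(3), np.zeros(3), K))
```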
3 code implementations • 28 May 2018 • Rico Jonschkowski, Divyam Rastogi, Oliver Brock
We present differentiable particle filters (DPFs): a differentiable implementation of the particle filter algorithm with learnable motion and measurement models.
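To make the DPF idea above more tangible, here is a minimal sketch of one filter step in which the motion and measurement models are simple parameterized functions standing in for the learnable networks. The specific model forms (and the omission of resampling) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_model(particles, action, noise_scale=0.05):
    """Stand-in for a learnable motion model: shift particles by the action plus noise."""
    return particles + action + noise_scale * rng.standard_normal(particles.shape)

def measurement_model(particles, observation):
    """Stand-in for a learnable measurement model: likelihood from distance to the observation."""
    d = np.linalg.norm(particles - observation, axis=-1)
    return np.exp(-0.5 * (d / 0.2) ** 2)

def dpf_step(particles, weights, action, observation):
    particles = motion_model(particles, action)                      # predict
    weights = weights * measurement_model(particles, observation)    # update
    weights = weights / weights.sum()
    return particles, weights

particles = rng.standard_normal((200, 2))
weights = np.full(200, 1 / 200)
particles, weights = dpf_step(particles, weights, action=np.array([0.1, 0.0]),
                              observation=np.array([0.1, 0.0]))
print((particles * weights[:, None]).sum(axis=0))                    # weighted state estimate
```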
no code implementations • 27 May 2017 • Rico Jonschkowski, Roland Hafner, Jonathan Scholz, Martin Riedmiller
We propose position-velocity encoders (PVEs), which learn, without supervision, to encode images to positions and velocities of task-relevant objects.
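As a rough illustration of such an encoding, the sketch below extracts per-channel positions from feature maps with a spatial soft-argmax and estimates velocities as finite differences across consecutive frames. The unsupervised training losses that PVEs rely on are not shown, and the specific encoder form here is an assumption for illustration.

```python
import numpy as np

def spatial_soft_argmax(feature_maps):
    """Turn each feature map (C, H, W) into an (x, y) feature point via a
    softmax-weighted average of pixel coordinates."""
    C, H, W = feature_maps.shape
    ys, xs = np.mgrid[0:H, 0:W]
    flat = feature_maps.reshape(C, -1)
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    x = probs @ xs.reshape(-1)
    y = probs @ ys.reshape(-1)
    return np.stack([x, y], axis=-1)                      # (C, 2) positions

def positions_and_velocities(feat_t, feat_prev, dt=1.0):
    pos_t = spatial_soft_argmax(feat_t)
    pos_prev = spatial_soft_argmax(feat_prev)
    vel_t = (pos_t - pos_prev) / dt                       # finite-difference velocities
    return pos_t, vel_t

feat_prev = np.random.rand(4, 32, 32)                     # e.g. conv feature maps
feat_t = np.random.rand(4, 32, 32)
pos, vel = positions_and_velocities(feat_t, feat_prev)
print(pos.shape, vel.shape)                               # (4, 2) (4, 2)
```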
1 code implementation • 19 Nov 2015 • Rico Jonschkowski, Sebastian Höfer, Oliver Brock
Supervised, semi-supervised, and unsupervised learning estimate a function given input/output samples.