no code implementations • 28 Jun 2024 • William F. Whitney, Jacob Varley, Deepali Jain, Krzysztof Choromanski, Sumeet Singh, Vikas Sindhwani
We present High-Density Visual Particle Dynamics (HD-VPD), a learned world model that can emulate the physical dynamics of real scenes by processing massive latent point clouds containing 100K+ particles.
no code implementations • 22 May 2024 • Yulia Rubanova, Tatiana Lopez-Guevara, Kelsey R. Allen, William F. Whitney, Kimberly Stachenfeld, Tobias Pfaff
Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games.
no code implementations • 22 Jan 2024 • Tatiana Lopez-Guevara, Yulia Rubanova, William F. Whitney, Tobias Pfaff, Kimberly Stachenfeld, Kelsey R. Allen
Accurately simulating real-world object dynamics is essential for various applications such as robotics, engineering, graphics, and design.
no code implementations • 8 Dec 2023 • William F. Whitney, Tatiana Lopez-Guevara, Tobias Pfaff, Yulia Rubanova, Thomas Kipf, Kimberly Stachenfeld, Kelsey R. Allen
Realistic simulation is critical for applications ranging from robotics to animation.
no code implementations • 14 Sep 2023 • Cristina Pinneri, Sarah Bechtle, Markus Wulfmeier, Arunkumar Byravan, Jingwei Zhang, William F. Whitney, Martin Riedmiller
We present a novel approach to address the challenge of generalization in offline reinforcement learning (RL), where the agent learns from a fixed dataset without any additional interaction with the environment.
no code implementations • 2 Dec 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna
We introduce quantile filtered imitation learning (QFIL), a novel policy improvement operator designed for offline reinforcement learning.
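As a rough illustration of the quantile-filtering idea (simplified here with a dataset-wide quantile rather than the paper's per-state construction, and with hypothetical array inputs):

```python
import numpy as np

def qfil_filter(states, actions, q_values, tau=0.9):
    """Sketch of quantile-based filtering: keep only (state, action)
    pairs whose estimated value exceeds the tau-quantile of values in
    the dataset. An imitation (behavioral cloning) policy would then
    be fit to the surviving pairs. This uses a global quantile for
    simplicity; the method in the paper filters per state."""
    threshold = np.quantile(q_values, tau)
    keep = q_values >= threshold
    return states[keep], actions[keep]
```

The filtered pairs serve as the training set for a standard supervised policy, turning policy improvement into filtered imitation.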
1 code implementation • NeurIPS 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna
In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.
no code implementations • 23 Jan 2021 • William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, Kyunghyun Cho, Martin Riedmiller
This causes BBE to be actively detrimental to policy learning in many control tasks.
1 code implementation • 15 Sep 2020 • William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, Kyunghyun Cho
We consider the problem of evaluating representations of data for use in solving a downstream task.
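One common way to make such evaluation concrete is a loss-data curve: train a simple probe on top of frozen features at increasing dataset sizes and record the downstream loss at each size. The sketch below assumes a ridge-regularized linear probe and mean-squared-error loss; the function names are illustrative, not from the paper.

```python
import numpy as np

def fit_linear_probe(X, y, reg=1e-3):
    # Ridge-regularized least-squares probe on frozen features.
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)
    return w

def loss_data_curve(features, labels, sizes):
    """Downstream MSE of a linear probe trained on the first n
    examples, evaluated on the full set, for each n in sizes."""
    curve = []
    for n in sizes:
        w = fit_linear_probe(features[:n], labels[:n])
        curve.append(float(np.mean((features @ w - labels) ** 2)))
    return curve
```

A representation whose curve drops faster (low loss from few examples) is, under this criterion, more useful for the downstream task.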
1 code implementation • 27 Jun 2020 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna
We show that this discrepancy is due to the \emph{action-stability} of their objectives.
no code implementations • 17 Jan 2019 • William F. Whitney, Rob Fergus
We propose an unsupervised variational model for disentangling video into independent factors, i.e., each factor's future can be predicted from its past without considering the others.
1 code implementation • 19 May 2017 • Mikael Henaff, William F. Whitney, Yann LeCun
Action planning using learned, differentiable forward models of the world is a general approach with a number of desirable properties: improved sample complexity over model-free RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces.
1 code implementation • 21 Feb 2017 • Vlad Firoiu, William F. Whitney, Joshua B. Tenenbaum
There has been a recent explosion in the capabilities of game-playing artificial intelligence.
no code implementations • 22 Feb 2016 • William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum
We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.