Search Results for author: William F. Whitney

Found 14 papers, 5 papers with code

Modeling the Real World with High-Density Visual Particle Dynamics

no code implementations 28 Jun 2024 William F. Whitney, Jacob Varley, Deepali Jain, Krzysztof Choromanski, Sumeet Singh, Vikas Sindhwani

We present High-Density Visual Particle Dynamics (HD-VPD), a learned world model that can emulate the physical dynamics of real scenes by processing massive latent point clouds containing 100K+ particles.

Graph Neural Network
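
For intuition only, a rough sketch of one graph-network-style update over a particle cloud, in the spirit of the latent point-cloud dynamics described above. The k-NN graph construction, the random-weight linear "networks", and the function name `knn_message_passing_step` are placeholders for the learned components; nothing here is taken from the HD-VPD system.

```python
# Generic message-passing step over a toy particle cloud (illustrative only).
import numpy as np

def knn_message_passing_step(positions, k=8, hidden=32, rng=np.random.default_rng(0)):
    n, d = positions.shape
    # k nearest neighbours per particle (brute force, fine for a sketch).
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    neighbours = np.argsort(dists, axis=1)[:, 1:k + 1]
    # Edge features: relative offsets to each neighbour.
    rel = positions[neighbours] - positions[:, None]          # (n, k, d)
    # Stand-in "edge network" and "node network": random linear maps.
    W_edge = rng.normal(scale=0.1, size=(d, hidden))
    W_node = rng.normal(scale=0.1, size=(hidden, d))
    messages = np.tanh(rel @ W_edge).mean(axis=1)             # aggregate over neighbours
    return positions + messages @ W_node                      # predicted next positions

cloud = np.random.default_rng(1).normal(size=(1000, 3))       # toy stand-in for a dense cloud
next_cloud = knn_message_passing_step(cloud)
print(next_cloud.shape)  # (1000, 3)
```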

Learning rigid-body simulators over implicit shapes for large-scale scenes and vision

no code implementations 22 May 2024 Yulia Rubanova, Tatiana Lopez-Guevara, Kelsey R. Allen, William F. Whitney, Kimberly Stachenfeld, Tobias Pfaff

Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games.

Scaling Face Interaction Graph Networks to Real World Scenes

no code implementations 22 Jan 2024 Tatiana Lopez-Guevara, Yulia Rubanova, William F. Whitney, Tobias Pfaff, Kimberly Stachenfeld, Kelsey R. Allen

Accurately simulating real world object dynamics is essential for various applications such as robotics, engineering, graphics, and design.

Friction

Equivariant Data Augmentation for Generalization in Offline Reinforcement Learning

no code implementations 14 Sep 2023 Cristina Pinneri, Sarah Bechtle, Markus Wulfmeier, Arunkumar Byravan, Jingwei Zhang, William F. Whitney, Martin Riedmiller

We present a novel approach to address the challenge of generalization in offline reinforcement learning (RL), where the agent learns from a fixed dataset without any additional interaction with the environment.

Data Augmentation Offline RL +3

Quantile Filtered Imitation Learning

no code implementations 2 Dec 2021 David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

We introduce quantile filtered imitation learning (QFIL), a novel policy improvement operator designed for offline reinforcement learning.

D4RL Imitation Learning
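
To illustrate the filtering idea in the abstract, here is a minimal sketch: keep only the transitions whose estimated value clears a chosen quantile, then run imitation learning (behavior cloning) on what remains. The value estimates and the helper `quantile_filter` are hypothetical placeholders, not the paper's implementation.

```python
# Minimal quantile-filtering sketch over an offline dataset (illustrative only).
import numpy as np

def quantile_filter(states, actions, value_estimates, q=0.7):
    """Keep only transitions whose value estimate is at or above the q-th quantile."""
    threshold = np.quantile(value_estimates, q)
    keep = value_estimates >= threshold
    return states[keep], actions[keep]

# Toy usage: filter a random dataset, then (in practice) behavior-clone on the result.
rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 4))
actions = rng.normal(size=(1000, 2))
values = rng.normal(size=1000)          # stand-in for a learned value/Q estimate
top_states, top_actions = quantile_filter(states, actions, values, q=0.7)
print(top_states.shape, top_actions.shape)  # roughly the top 30% of transitions
```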

Offline RL Without Off-Policy Evaluation

1 code implementation NeurIPS 2021 David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy.

D4RL Offline RL +1
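
A toy tabular sketch of the one-step recipe referenced above: evaluate the behavior policy directly from logged (s, a, r, s', a') tuples with a SARSA-style update (no off-policy evaluation), then take a single improvement step restricted to actions that appear in the data. The paper's experiments use function approximation on D4RL; this tabular setup is only for illustration.

```python
# Tabular "one-step" offline RL sketch (illustrative only).
import numpy as np
from collections import defaultdict

def one_step_policy(transitions, n_states, n_actions, gamma=0.99, lr=0.1, epochs=50):
    Q = np.zeros((n_states, n_actions))
    seen = defaultdict(set)                      # actions observed per state
    for s, a, r, s2, a2 in transitions:
        seen[s].add(a)
    for _ in range(epochs):                      # SARSA-style evaluation of the behavior policy
        for s, a, r, s2, a2 in transitions:
            target = r + gamma * Q[s2, a2]
            Q[s, a] += lr * (target - Q[s, a])
    policy = {}
    for s, acts in seen.items():                 # one improvement step over logged actions only
        policy[s] = max(acts, key=lambda a: Q[s, a])
    return policy, Q

# Toy usage: two states, two actions.
data = [(0, 0, 1.0, 1, 1), (1, 1, 0.0, 0, 0), (0, 1, 0.5, 1, 1)]
pi, Q = one_step_policy(data, n_states=2, n_actions=2)
print(pi)
```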

Evaluating representations by the complexity of learning low-loss predictors

1 code implementation 15 Sep 2020 William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, Kyunghyun Cho

We consider the problem of evaluating representations of data for use in solving a downstream task.

Disentangling Video with Independent Prediction

no code implementations17 Jan 2019 William F. Whitney, Rob Fergus

We propose an unsupervised variational model for disentangling video into independent factors, i.e., each factor's future can be predicted from its past without considering the others.

Model-Based Planning with Discrete and Continuous Actions

1 code implementation 19 May 2017 Mikael Henaff, William F. Whitney, Yann LeCun

Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over model-free RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces.
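
A small sketch of the gradient-based planning property mentioned in the abstract: with a differentiable forward model, a sequence of continuous actions can be optimized by backpropagating a goal-reaching loss through the rollout. The linear `toy_model` below is a stand-in for a learned network, not the paper's architecture, and the discrete-action component of the paper is not shown.

```python
# Gradient-based action planning through a differentiable forward model (illustrative only).
import torch

def plan(forward_model, s0, goal, horizon=10, steps=200, lr=0.1):
    actions = torch.zeros(horizon, s0.shape[0], requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        s = s0
        for t in range(horizon):                 # differentiable rollout
            s = forward_model(s, actions[t])
        loss = ((s - goal) ** 2).sum()           # reach the goal at the final step
        opt.zero_grad()
        loss.backward()
        opt.step()
    return actions.detach()

def toy_model(state, action):
    """Toy linear dynamics standing in for a learned model."""
    return state + 0.1 * action

s0 = torch.zeros(2)
goal = torch.tensor([1.0, -1.0])
actions = plan(toy_model, s0, goal)
print(actions.shape)  # (horizon, action_dim)
```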

Understanding Visual Concepts with Continuation Learning

no code implementations 22 Feb 2016 William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum

We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.

Atari Games
