Search Results for author: William F. Whitney

Found 8 papers, 4 papers with code

Offline RL Without Off-Policy Evaluation

no code implementations • NeurIPS 2021 • David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna

Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation.

Offline RL

Evaluating representations by the complexity of learning low-loss predictors

1 code implementation • 15 Sep 2020 • William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, Kyunghyun Cho

We consider the problem of evaluating representations of data for use in solving a downstream task.
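As a rough illustration of this idea (a minimal sketch, not the paper's exact metrics — the toy task, encoders, and probe here are all assumptions): a representation can be scored by how well a simple probe trained on top of it predicts the downstream target from limited data.

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_loss(encode, n_train):
    """Fit a least-squares probe on n_train encoded points; return held-out MSE."""
    x = rng.normal(size=(n_train + 500, 4))
    y = x[:, 0] - x[:, 1]                     # toy downstream target (illustrative)
    z = encode(x)                             # representation under evaluation
    w, *_ = np.linalg.lstsq(z[:n_train], y[:n_train], rcond=None)
    return float(np.mean((z[n_train:] @ w - y[n_train:]) ** 2))

good = lambda x: x                            # representation that exposes the signal
bad = lambda x: rng.normal(size=x.shape)      # uninformative representation

# The informative representation reaches low loss from few samples;
# the uninformative one cannot, at any sample size.
low, high = probe_loss(good, 50), probe_loss(bad, 50)
```

Sweeping `n_train` traces out a loss-versus-data curve for each representation, which is the kind of object this line of work compares.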

Disentangling Video with Independent Prediction

no code implementations • 17 Jan 2019 • William F. Whitney, Rob Fergus

We propose an unsupervised variational model for disentangling video into independent factors, i.e., each factor's future can be predicted from its past without considering the others.

Model-Based Planning with Discrete and Continuous Actions

1 code implementation • 19 May 2017 • Mikael Henaff, William F. Whitney, Yann LeCun

Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over model-free RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces.
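The gradient-based planning mentioned above can be sketched in a few lines (a hedged illustration, not the paper's method: it assumes a *known* linear forward model `s' = A s + B a` rather than a learned one, and a purely terminal cost):

```python
import numpy as np

# Illustrative linear dynamics standing in for a learned differentiable model.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
horizon, lr = 20, 0.5
s0, goal = np.zeros(2), np.array([1.0, 0.0])

actions = np.zeros((horizon, 1))      # continuous action sequence to optimize
for _ in range(500):
    # Roll the model forward under the current plan.
    states = [s0]
    for t in range(horizon):
        states.append(A @ states[-1] + B @ actions[t])
    # Backpropagate the terminal cost 0.5 * ||s_T - goal||^2 through the
    # dynamics by hand (an autodiff framework would do this automatically).
    grad_s = states[-1] - goal
    grads = np.zeros_like(actions)
    for t in reversed(range(horizon)):
        grads[t] = B.T @ grad_s
        grad_s = A.T @ grad_s
    actions -= lr * grads             # gradient step on the plan itself

final_state = states[-1]
```

Because the model is differentiable, the plan is improved by plain gradient descent on the actions — the property the abstract highlights for continuous action spaces.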

Understanding Visual Concepts with Continuation Learning

no code implementations • 22 Feb 2016 • William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum

We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.

Atari Games
