Search Results for author: Ryan Faulkner

Found 10 papers, 6 papers with code

Semantic Segmentation on 3D Point Clouds with High Density Variations

no code implementations • 4 Jul 2023 Ryan Faulkner, Luke Haub, Simon Ratcliffe, Ian Reid, Tat-Jun Chin

LiDAR scanning for surveying applications acquires measurements over wide areas and long distances, which produces large-scale 3D point clouds with significant local density variations.

3D Semantic Segmentation

Solving Reasoning Tasks with a Slot Transformer

no code implementations • 20 Oct 2022 Ryan Faulkner, Daniel Zoran

The ability to carve the world into useful abstractions in order to reason about time and space is a crucial component of intelligence.

Representation Learning • Variational Inference

Rapid Task-Solving in Novel Environments

no code implementations • ICLR 2021 Sam Ritter, Ryan Faulkner, Laurent Sartran, Adam Santoro, Matt Botvinick, David Raposo

We show that Episodic Planning Networks (EPNs) learn to execute a value iteration-like planning algorithm and that they generalize to situations beyond their training experience.
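For reference, value iteration is the classical planning procedure the learned computation is compared against: repeated Bellman backups over state values until a fixed point. The chain MDP below is a hypothetical toy example, not an environment from the paper.

```python
# Minimal value iteration sketch on a 5-state deterministic chain.
# Action 0 moves left, action 1 moves right; entering the last state
# yields reward 1. This is an illustrative toy, not the paper's setup.
import numpy as np

n_states = 5
gamma = 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

V = np.zeros(n_states)
for _ in range(1000):  # Bellman backups until (approximate) convergence
    V_new = np.array([
        max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in (0, 1))
        for s in range(n_states)
    ])
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
# V converges to [7.29, 8.1, 9.0, 10.0, 10.0]
```

Each sweep propagates reward information one state further from the goal, which is the same backward flow of value that the paper's analysis looks for inside the trained network.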


Generalization of Reinforcement Learners with Working and Episodic Memory

1 code implementation • NeurIPS 2019 Meire Fortunato, Melissa Tan, Ryan Faulkner, Steven Hansen, Adrià Puigdomènech Badia, Gavin Buttimore, Charlie Deck, Joel Z. Leibo, Charles Blundell

In this paper, we develop a comprehensive methodology to test different kinds of memory in an agent and to assess how well the agent applies what it learns in training to a holdout set that differs from the training set along dimensions we suggest are relevant for evaluating memory-specific generalization.

Holdout Set

Dyna Planning using a Feature Based Generative Model

no code implementations • 23 May 2018 Ryan Faulkner, Doina Precup

Dyna-style reinforcement learning is a powerful approach for problems where not much real data is available.

Reinforcement Learning (RL)
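The Dyna architecture referenced above interleaves direct RL updates from real experience with extra updates replayed from a learned model. The sketch below is a standard tabular Dyna-Q on a hypothetical toy corridor, not the paper's feature-based generative model.

```python
# Tabular Dyna-Q sketch: each real transition updates the Q-table
# (direct RL), is stored in a deterministic model, and then the model
# is sampled for extra "imagined" updates (planning). Toy 6-state
# corridor; reaching the rightmost state ends the episode with reward 1.
import random

n_states, n_actions = 6, 2
alpha, gamma, eps = 0.5, 0.95, 0.1
n_planning = 10  # model-based updates per real step

Q = [[0.0] * n_actions for _ in range(n_states)]
model = {}  # (s, a) -> (r, s2)

def env_step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return (1.0 if s2 == n_states - 1 else 0.0), s2

random.seed(0)
for episode in range(50):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        r, s2 = env_step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # direct RL
        model[(s, a)] = (r, s2)                                 # model learning
        for _ in range(n_planning):                             # planning
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) - Q[ps][pa])
        s = s2
```

The planning loop is what makes Dyna sample-efficient: each real transition is reused many times, which matters precisely when real data is scarce.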

Grounded Language Learning in a Simulated 3D World

1 code implementation • 20 Jun 2017 Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom

Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions.

Grounded language learning
