Search Results for author: Andrew Jaegle

Found 24 papers, 12 papers with code

Extended Intelligence

no code implementations • 15 Sep 2022 • David L Barack, Andrew Jaegle

We argue that intelligence, construed as the disposition to perform tasks successfully, is a property of systems composed of agents and their contexts.

HiP: Hierarchical Perceiver

2 code implementations • 22 Feb 2022 • Joao Carreira, Skanda Koppula, Daniel Zoran, Adria Recasens, Catalin Ionescu, Olivier Henaff, Evan Shelhamer, Relja Arandjelovic, Matt Botvinick, Oriol Vinyals, Karen Simonyan, Andrew Zisserman, Andrew Jaegle

This, however, hinders them from scaling up to the input sizes required to process raw high-resolution images or video.

SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision

1 code implementation • NeurIPS 2021 • Irina Higgins, Peter Wirnsberger, Andrew Jaegle, Aleksandar Botev

Using SyMetric, we identify a set of architectural choices that significantly improve the performance of a previously proposed model for inferring latent dynamics from pixels, the Hamiltonian Generative Network (HGN).

Autonomous Driving • Image Reconstruction

Which priors matter? Benchmarking models for learning latent dynamics

2 code implementations • 9 Nov 2021 • Aleksandar Botev, Andrew Jaegle, Peter Wirnsberger, Daniel Hennes, Irina Higgins

Learning dynamics is at the heart of many important applications of machine learning (ML), such as robotics and autonomous driving.

Autonomous Driving • Benchmarking

Imitation by Predicting Observations

no code implementations • 8 Jul 2021 • Andrew Jaegle, Yury Sulsky, Arun Ahuja, Jake Bruce, Rob Fergus, Greg Wayne

Imitation learning enables agents to reuse and adapt the hard-won expertise of others, offering a solution to several key challenges in learning behavior.

Continuous Control • Imitation Learning

Perceiver: General Perception with Iterative Attention

10 code implementations • 4 Mar 2021 • Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, Joao Carreira

The perception models used in deep learning, on the other hand, are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models.

3D Point Cloud Classification • Audio Classification +1

Beyond Tabula-Rasa: a Modular Reinforcement Learning Approach for Physically Embedded 3D Sokoban

no code implementations • 3 Oct 2020 • Peter Karkus, Mehdi Mirza, Arthur Guez, Andrew Jaegle, Timothy Lillicrap, Lars Buesing, Nicolas Heess, Theophane Weber

We explore whether integrated tasks like Mujoban can be solved by composing RL modules together in a sense-plan-act hierarchy, where modules have well-defined roles, as in classic robot architectures.

reinforcement-learning • Reinforcement Learning (RL)

Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction

no code implementations • 25 Sep 2019 • Karl Pertsch, Oleh Rybkin, Jingyun Yang, Konstantinos G. Derpanis, Kostas Daniilidis, Joseph J. Lim, Andrew Jaegle

Reasoning flexibly and efficiently about temporal sequences requires abstract representations that compactly capture the important information in the sequence.

Temporal Sequences

Codes, Functions, and Causes: A Critique of Brette's Conceptual Analysis of Coding

no code implementations • 18 Apr 2019 • David Barack, Andrew Jaegle

Here, we argue that Brette's conceptual analysis mischaracterizes the structure of causal claims in coding and other forms of analysis-by-decomposition.

Learning what you can do before doing anything

no code implementations • ICLR 2019 • Oleh Rybkin, Karl Pertsch, Konstantinos G. Derpanis, Kostas Daniilidis, Andrew Jaegle

We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions.

Video Prediction

Predicting the Future with Transformational States

no code implementations • 26 Mar 2018 • Andrew Jaegle, Oleh Rybkin, Konstantinos G. Derpanis, Kostas Daniilidis

We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by applying the accumulated state transformation to past states with a learned operator.

Understanding image motion with group representations

no code implementations • ICLR 2018 • Andrew Jaegle, Stephen Phillips, Daphne Ippolito, Kostas Daniilidis

Our results demonstrate that this representation is useful for learning motion in the general setting where explicit labels are difficult to obtain.

Fast, Robust, Continuous Monocular Egomotion Computation

1 code implementation • 16 Feb 2016 • Andrew Jaegle, Stephen Phillips, Kostas Daniilidis

We propose robust methods for estimating camera egomotion in noisy, real-world monocular image sequences in the general case of unknown observer rotation and translation with two views and a small baseline.

counterfactual • Motion Estimation +2
