Search Results for author: Loic Matthey

Found 20 papers, 13 papers with code

Scaling Instructable Agents Across Many Simulated Worlds

no code implementations 13 Mar 2024 SIMA Team, Maria Abi Raad, Arun Ahuja, Catarina Barros, Frederic Besse, Andrew Bolt, Adrian Bolton, Bethanie Brownfield, Gavin Buttimore, Max Cant, Sarah Chakera, Stephanie C. Y. Chan, Jeff Clune, Adrian Collister, Vikki Copeman, Alex Cullum, Ishita Dasgupta, Dario de Cesare, Julia Di Trapani, Yani Donchev, Emma Dunleavy, Martin Engelcke, Ryan Faulkner, Frankie Garcia, Charles Gbadamosi, Zhitao Gong, Lucy Gonzales, Kshitij Gupta, Karol Gregor, Arne Olav Hallingstad, Tim Harley, Sam Haves, Felix Hill, Ed Hirst, Drew A. Hudson, Jony Hudson, Steph Hughes-Fitt, Danilo J. Rezende, Mimi Jasarevic, Laura Kampis, Rosemary Ke, Thomas Keck, Junkyung Kim, Oscar Knagg, Kavya Kopparapu, Andrew Lampinen, Shane Legg, Alexander Lerchner, Marjorie Limont, YuLan Liu, Maria Loks-Thompson, Joseph Marino, Kathryn Martin Cussons, Loic Matthey, Siobhan Mcloughlin, Piermaria Mendolicchio, Hamza Merzic, Anna Mitenkova, Alexandre Moufarek, Valeria Oliveira, Yanko Oliveira, Hannah Openshaw, Renke Pan, Aneesh Pappu, Alex Platonov, Ollie Purkiss, David Reichert, John Reid, Pierre Harvey Richemond, Tyson Roberts, Giles Ruscoe, Jaume Sanchez Elias, Tasha Sandars, Daniel P. Sawyer, Tim Scholtes, Guy Simmons, Daniel Slater, Hubert Soyer, Heiko Strathmann, Peter Stys, Allison C. Tam, Denis Teplyashin, Tayfun Terzi, Davide Vercelli, Bojan Vujatovic, Marcus Wainwright, Jane X. Wang, Zhengdong Wang, Daan Wierstra, Duncan Williams, Nathaniel Wong, Sarah York, Nick Young

Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI.

Evaluating VLMs for Score-Based, Multi-Probe Annotation of 3D Objects

no code implementations 29 Nov 2023 Rishabh Kabra, Loic Matthey, Alexander Lerchner, Niloy J. Mitra

Unlabeled 3D objects present an opportunity to leverage pretrained vision language models (VLMs) on a range of annotation tasks -- from describing object semantics to physical properties.

Tasks: In-Context Learning, Language Modelling, +2 more

SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition

1 code implementation NeurIPS 2021 Rishabh Kabra, Daniel Zoran, Goker Erdogan, Loic Matthey, Antonia Creswell, Matthew Botvinick, Alexander Lerchner, Christopher P. Burgess

Leveraging the shared structure that exists across different scenes, our model learns to infer two sets of latent representations from RGB video input alone: a set of "object" latents, corresponding to the time-invariant, object-level contents of the scene, as well as a set of "frame" latents, corresponding to global time-varying elements such as viewpoint.

Tasks: Instance Segmentation, Object, +1 more
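The two-set factorisation described above can be illustrated with array shapes alone: object latents are broadcast across time, frame latents across slots, so the decoder sees every (object, frame) pairing. A minimal numpy sketch; the sizes and the concatenation-style pairing are illustrative, not the paper's exact configuration:

```python
import numpy as np

# Illustrative sizes, not the paper's configuration.
K, T, D = 4, 8, 16          # object slots, video frames, latent size

rng = np.random.default_rng(0)
object_latents = rng.normal(size=(K, D))   # time-invariant: one per object
frame_latents = rng.normal(size=(T, D))    # time-varying: one per frame

# Pair every object latent with every frame latent: object latents are
# broadcast across time, frame latents across object slots.
obj = np.broadcast_to(object_latents[None, :, :], (T, K, D))
frm = np.broadcast_to(frame_latents[:, None, :], (T, K, D))
pairs = np.concatenate([obj, frm], axis=-1)

print(pairs.shape)  # (8, 4, 32)
```

Each of the T x K pairs would then be decoded (together with pixel coordinates) into per-slot appearance and masks.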

Unsupervised Model Selection for Variational Disentangled Representation Learning

no code implementations ICLR 2020 Sunny Duan, Loic Matthey, Andre Saraiva, Nicholas Watters, Christopher P. Burgess, Alexander Lerchner, Irina Higgins

Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks.

Tasks: Attribute, Disentanglement, +2 more

Multi-Object Representation Learning with Iterative Variational Inference

6 code implementations 1 Mar 2019 Klaus Greff, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, Alexander Lerchner

Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities.

Tasks: Object, Representation Learning, +3 more

Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs

2 code implementations 21 Jan 2019 Nicholas Watters, Loic Matthey, Christopher P. Burgess, Alexander Lerchner

We present a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations.

Tasks: Neural Rendering
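The architecture's core operation is to tile ("broadcast") the latent vector across a spatial grid and append fixed coordinate channels before any convolutions. A minimal numpy sketch of that broadcast step; the convolutional stack that follows is omitted, and the latent size and grid size are illustrative:

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a latent vector z over an H x W grid and append fixed
    x/y coordinate channels, as in the Spatial Broadcast Decoder."""
    d = z.shape[-1]
    tiled = np.broadcast_to(z, (height, width, d))
    # Coordinate channels in [-1, 1]; 'ij' indexing makes ys vary over
    # rows and xs over columns.
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    return np.concatenate([tiled, xs[..., None], ys[..., None]], axis=-1)

z = np.ones(10)                      # a 10-dim latent, for illustration
grid = spatial_broadcast(z, 64, 64)
print(grid.shape)  # (64, 64, 12) -- latent channels plus x and y coords
```

Because every spatial location receives the same latent plus its own coordinates, the convolutions that follow can render position-dependent content from a position-free code, which is what eases disentangling.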

Towards a Definition of Disentangled Representations

1 code implementation 5 Dec 2018 Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, Alexander Lerchner

Here we propose that a principled solution to characterising disentangled representations can be found by focusing on the transformation properties of the world.

Tasks: Representation Learning
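The transformation-based characterisation can be sketched in symbols. This is a paraphrase of the paper's symmetry-based definition, not its verbatim statement:

```latex
% Suppose the symmetry group of the world decomposes as a direct product
\[
  G = G_1 \times G_2 \times \dots \times G_n .
\]
% A representation $f \colon W \to V$ is disentangled with respect to this
% decomposition if the representation space splits as a direct sum
\[
  V = V_1 \oplus V_2 \oplus \dots \oplus V_n ,
\]
% where each subspace $V_i$ is affected only by the action of $G_i$ and is
% invariant to every other $G_j$, with $f$ equivariant:
\[
  f(g \cdot w) = g \cdot f(w), \qquad \forall\, g \in G,\ w \in W .
\]
```

On this view, "disentangled" is relative to a chosen decomposition of the world's symmetries (e.g. translations, rotations, colour changes), rather than an intrinsic property of the representation alone.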

Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies

1 code implementation NeurIPS 2018 Alessandro Achille, Tom Eccles, Loic Matthey, Christopher P. Burgess, Nick Watters, Alexander Lerchner, Irina Higgins

Intelligent behaviour in the real world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge.

Tasks: Representation Learning

Understanding disentangling in $\beta$-VAE

23 code implementations 10 Apr 2018 Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, Alexander Lerchner

We present new intuitions and theoretical assessments of the emergence of disentangled representations in variational autoencoders.
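The paper's analysis motivates a capacity-controlled variant of the beta-VAE objective, in which the KL term is driven toward a target capacity C that is annealed upward during training. A minimal sketch; the hyperparameter values below are illustrative, not the paper's:

```python
def capacity_objective(recon_loss, kl, step, gamma=1000.0,
                       c_max=25.0, anneal_steps=100_000):
    """Capacity-controlled beta-VAE objective: instead of weighting the
    KL directly, penalise its distance to a capacity target C that is
    linearly annealed from 0 to c_max over training."""
    c = c_max * min(step / anneal_steps, 1.0)
    return recon_loss + gamma * abs(kl - c)

# Early in training C ~ 0, so any KL usage is heavily penalised...
early = capacity_objective(recon_loss=10.0, kl=5.0, step=0)
# ...later the model is allowed (indeed pushed) to use ~C nats of KL.
late = capacity_objective(recon_loss=10.0, kl=5.0, step=20_000)
print(early, late)  # 5010.0 10.0
```

Gradually raising C lets the model admit one latent dimension at a time, which is the mechanism the paper offers for why disentangled factors emerge.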

SCAN: Learning Hierarchical Compositional Visual Concepts

no code implementations ICLR 2018 Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner

SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.
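The "fast symbol association" grounds a symbol-conditioned posterior in the visual posterior of a pretrained beta-VAE, typically via a KL term between two diagonal Gaussians. A minimal sketch of such a grounding term; the posterior parameters below are placeholder values, and the exact loss composition is an assumption, not the paper's verbatim objective:

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ) for
    diagonal Gaussians, summed over latent dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

# Grounding-term sketch: pull the symbol-conditioned posterior toward the
# (frozen) visual posterior so concepts inherit the visual factorisation.
mu_vis, lv_vis = np.zeros(8), np.zeros(8)       # visual posterior (placeholder)
mu_sym, lv_sym = np.full(8, 0.5), np.zeros(8)   # symbol posterior (placeholder)
print(gaussian_kl(mu_vis, lv_vis, mu_sym, lv_sym))  # 1.0
```

Because the visual latents are already disentangled, minimising this term associates each symbol with the visual primitives it consistently co-occurs with.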

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

6 code implementations ICLR 2017 Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.

Tasks: Disentanglement
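The constrained framework amounts to reweighting the KL term of the standard VAE objective by a coefficient beta > 1, pressuring the posterior toward the factorised unit-Gaussian prior. A minimal sketch of the resulting loss for a diagonal-Gaussian posterior; beta = 4 is an illustrative value, not a recommendation:

```python
import numpy as np

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction term plus a beta-weighted KL
    between the posterior N(mu, exp(logvar)) and the N(0, I) prior.
    beta = 1 recovers the standard VAE ELBO; beta > 1 encourages
    disentangling at some cost in reconstruction quality."""
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return recon_loss + beta * kl

# With the posterior exactly at the prior, the KL term vanishes:
print(beta_vae_loss(1.0, np.zeros(10), np.zeros(10)))  # 1.0
```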

Early Visual Concept Learning with Unsupervised Deep Learning

1 code implementation 17 Jun 2016 Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, Alexander Lerchner

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research.
