Search Results for author: Justin Bayer

Found 30 papers, 5 papers with code

PRISM: Probabilistic Real-Time Inference in Spatial World Models

no code implementations • 6 Dec 2022 • Atanas Mirchev, Baris Kayalibay, Ahmed Agha, Patrick van der Smagt, Daniel Cremers, Justin Bayer

We introduce PRISM, a method for real-time filtering in a probabilistic generative model of agent motion and visual perception.

Bayesian Inference
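
A minimal sketch of the generic predict/update filtering recursion this line of work builds on; the `motion_model` and `likelihood` below are toy stand-ins, not PRISM's learned motion and perception components.

```python
import numpy as np

def predict(particles, motion_model, noise_std=0.05):
    """Propagate pose particles through the motion model (predict step)."""
    return motion_model(particles) + np.random.normal(0.0, noise_std, particles.shape)

def update(particles, weights, observation, likelihood):
    """Reweight particles by the observation likelihood (update step)."""
    weights = weights * likelihood(observation, particles)
    return weights / weights.sum()

# Toy example: constant-velocity motion and a Gaussian observation likelihood.
particles = np.random.randn(1000, 2)            # 2-D poses
weights = np.full(1000, 1.0 / 1000)
motion_model = lambda p: p + np.array([0.1, 0.0])
likelihood = lambda obs, p: np.exp(-0.5 * np.sum((p - obs) ** 2, axis=1))

particles = predict(particles, motion_model)
weights = update(particles, weights, np.array([0.1, 0.0]), likelihood)
```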

Tracking and Planning with Spatial World Models

no code implementations • 25 Jan 2022 • Baris Kayalibay, Atanas Mirchev, Patrick van der Smagt, Justin Bayer

We introduce a method for real-time navigation and tracking with differentiably rendered world models.

Pose Estimation
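
A hedged sketch of gradient-based pose tracking against a differentiable renderer; `render` here is a toy projection, not the paper's world model.

```python
import torch

def render(pose, landmarks):
    """Toy 'renderer': 2-D landmarks shifted by the camera pose (differentiable)."""
    return landmarks - pose

landmarks = torch.randn(16, 2)
observed = render(torch.tensor([0.3, -0.2]), landmarks)   # ground-truth view

# Recover the pose by descending the photometric-style error.
pose = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([pose], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((render(pose, landmarks) - observed) ** 2).mean()
    loss.backward()
    opt.step()
# `pose` converges toward the true offset (0.3, -0.2).
```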

Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models

no code implementations • ICLR 2021 • Justin Bayer, Maximilian Soelch, Atanas Mirchev, Baris Kayalibay, Patrick van der Smagt

Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO).
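
A minimal sketch of the per-step sequential ELBO the abstract refers to, with toy Gaussian components; how the amortised posterior is conditioned is exactly the design choice the paper analyses.

```python
import torch
import torch.distributions as D

def elbo_step(x_t, z_prev, encoder, decoder, prior):
    q = encoder(x_t, z_prev)                  # amortised posterior q(z_t | ...)
    z_t = q.rsample()                         # reparameterised sample
    log_lik = decoder(z_t).log_prob(x_t)      # log p(x_t | z_t)
    kl = D.kl_divergence(q, prior(z_prev))    # KL(q || p(z_t | z_{t-1}))
    return log_lik - kl, z_t

# Toy 1-D Gaussian components.
encoder = lambda x, z: D.Normal(0.5 * (x + z), 1.0)
decoder = lambda z: D.Normal(z, 1.0)
prior = lambda z: D.Normal(z, 1.0)

z = torch.tensor(0.0)
elbo = torch.tensor(0.0)
for x in torch.randn(10):                     # a length-10 sequence
    step, z = elbo_step(x, z, encoder, decoder, prior)
    elbo = elbo + step
```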

Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF

no code implementations • ICLR 2021 • Atanas Mirchev, Baris Kayalibay, Patrick van der Smagt, Justin Bayer

We solve the problem of 6-DoF localisation and 3D dense reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model.

Bayesian Inference • Variational Inference

Learning Flat Latent Manifolds with VAEs

no code implementations • ICML 2020 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt

The Euclidean metric is prevalent, but it has the drawback of ignoring information about the similarity of data stored in the decoder, as captured by the framework of Riemannian geometry.

Computational Efficiency
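
The Riemannian metric in question can be computed directly from the decoder Jacobian as the pullback G(z) = J(z)ᵀJ(z); a sketch assuming PyTorch 2.x's `torch.func.jacrev` and a toy decoder. A "flat" latent manifold makes G(z) approximately a scaled identity everywhere.

```python
import torch
from torch.func import jacrev

# Toy decoder from a 2-D latent space to a 5-D observation space.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 5)
)

def metric(z):
    J = jacrev(decoder)(z)       # (5, 2) Jacobian of the decoder at z
    return J.T @ J               # (2, 2) pullback metric G(z)

G = metric(torch.zeros(2))
```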

Variational Tracking and Prediction with Generative Disentangled State-Space Models

no code implementations • 14 Oct 2019 • Adnan Akhundov, Maximilian Soelch, Justin Bayer, Patrick van der Smagt

We address tracking and prediction of multiple moving objects in visual data streams as inference and sampling in a disentangled latent state-space model.

Bayesian Inference • Position

Flat Manifold VAEs

no code implementations • 25 Sep 2019 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt

Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space.

Increasing the Generalisation Capacity of Conditional VAEs

no code implementations • 23 Aug 2019 • Alexej Klushyn, Nutan Chen, Botond Cseke, Justin Bayer, Patrick van der Smagt

We address the problem of one-to-many mappings in supervised learning, where a single instance has many different solutions of possibly equal cost.

Structured Prediction
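
A sketch of the conditional-VAE mechanism for one-to-many mappings: the decoder receives both the input x and a latent sample z, so different z draws yield different, equally valid outputs for the same x. Dimensions below are arbitrary illustrations.

```python
import torch
import torch.nn as nn

class CVAEDecoder(nn.Module):
    def __init__(self, x_dim=4, z_dim=2, y_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, 32), nn.ReLU(), nn.Linear(32, y_dim)
        )

    def forward(self, x, z):
        # Condition the generation on both the input and the latent code.
        return self.net(torch.cat([x, z], dim=-1))

decoder = CVAEDecoder()
x = torch.randn(1, 4)
# Two latent draws from the prior give two distinct predictions for the same x.
y1 = decoder(x, torch.randn(1, 2))
y2 = decoder(x, torch.randn(1, 2))
```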

On Deep Set Learning and the Choice of Aggregations

no code implementations • 18 Mar 2019 • Maximilian Soelch, Adnan Akhundov, Patrick van der Smagt, Justin Bayer

Recently, it has been shown that many functions on sets can be represented by sum decompositions.
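
The sum decomposition in question has the form f(X) = ρ(Σ_{x∈X} φ(x)), which is permutation invariant by construction. A minimal sketch, with the aggregation left as a parameter since that choice (sum, mean, max, ...) is precisely the paper's subject.

```python
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(3, 16), nn.ReLU())   # per-element embedding
rho = nn.Linear(16, 1)                             # post-aggregation map

def set_function(X, aggregate=torch.sum):
    # Embed each element, aggregate over the set axis, then transform.
    return rho(aggregate(phi(X), dim=0))

X = torch.randn(7, 3)                              # a set of 7 elements
# Permuting the set leaves the output unchanged.
assert torch.allclose(set_function(X), set_function(X[torch.randperm(7)]))
```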

Bayesian Learning of Neural Network Architectures

1 code implementation • 14 Jan 2019 • Georgi Dikov, Patrick van der Smagt, Justin Bayer

In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth.

Neural Architecture Search
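
A loose illustration (not the paper's parameterisation) of the underlying idea of making an architectural quantity such as layer width differentiable: units are soft-masked by how far their index lies below a learnable width, so the width can be optimised by gradient descent alongside the weights.

```python
import torch

hidden = torch.randn(1, 64)                       # activations of a 64-unit layer
width = torch.tensor(32.0, requires_grad=True)    # learnable "layer size"
idx = torch.arange(64, dtype=torch.float32)
mask = torch.sigmoid(width - idx)                 # ~1 for units below the width
masked = hidden * mask                            # gradients flow into `width`
```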

Fast Approximate Geodesics for Deep Generative Models

no code implementations • 19 Dec 2018 • Nutan Chen, Francesco Ferroni, Alexej Klushyn, Alexandros Paraschos, Justin Bayer, Patrick van der Smagt

The length of the geodesic between two data points along a Riemannian manifold, induced by a deep generative model, yields a principled measure of similarity.
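
A sketch of the underlying quantity: the observation-space length of a discretised latent curve, i.e. the summed distances between consecutive decoded points. Minimising this over curves between two latent codes approximates the geodesic; the decoder below is a toy stand-in.

```python
import torch

decoder = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 8))

def curve_length(z0, z1, steps=20):
    t = torch.linspace(0, 1, steps).unsqueeze(1)
    path = (1 - t) * z0 + t * z1                 # straight line as initial curve
    decoded = decoder(path)                      # map the curve to data space
    return (decoded[1:] - decoded[:-1]).norm(dim=1).sum()

length = curve_length(torch.zeros(2), torch.ones(2))
```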

Metrics for Deep Generative Models

no code implementations • 3 Nov 2017 • Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt

Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source, the latent space, to samples from a more complex distribution represented by a dataset.

Unsupervised Real-Time Control through Variational Empowerment

no code implementations • 13 Oct 2017 • Maximilian Karl, Maximilian Soelch, Philip Becker-Ehmck, Djalel Benbouzid, Patrick van der Smagt, Justin Bayer

We introduce a methodology for efficiently computing a lower bound to empowerment, allowing it to be used as an unsupervised cost function for policy learning in real-time control.
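
A sketch of the Barber-Agakov-style variational lower bound on mutual information that empowerment estimators of this kind build on: I(a; s') >= E[log q(a | s') - log pi(a)], with q a learned inverse model. Here the inverse model is hand-specified rather than learned.

```python
import torch
import torch.distributions as D

def empowerment_lower_bound(actions, next_states, policy, inverse_model):
    # E[ log q(a | s') - log pi(a) ] <= I(a; s')
    return (inverse_model(next_states).log_prob(actions)
            - policy.log_prob(actions)).mean()

# Toy 1-D setting: actions from a unit Gaussian, dynamics s' = a + noise.
policy = D.Normal(0.0, 1.0)
actions = policy.sample((512,))
next_states = actions + 0.1 * torch.randn(512)
inverse_model = lambda s: D.Normal(s, 0.1)   # q(a | s')

bound = empowerment_lower_bound(actions, next_states, policy, inverse_model)
```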

Unsupervised preprocessing for Tactile Data

no code implementations • 23 Jun 2016 • Maximilian Karl, Justin Bayer, Patrick van der Smagt

Tactile information is important for gripping, stable grasp, and in-hand manipulation, yet the complexity of tactile data prevents widespread use of such sensors.

Reinforcement Learning (RL)

ML-based tactile sensor calibration: A universal approach

no code implementations • 21 Jun 2016 • Maximilian Karl, Artur Lohrer, Dhananjay Shah, Frederik Diehl, Max Fiedler, Saahil Ognawala, Justin Bayer, Patrick van der Smagt

We study the responses of two tactile sensors, the fingertip sensor from the iCub and the BioTac under different external stimuli.

Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data

4 code implementations • 20 May 2016 • Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt

We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models.

Variational Inference
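
A sketch of the kind of latent transition DVBF identifies: the next latent state is produced by a (here locally linear) transition of the current state, the control, and a stochastic innovation, so all information must pass through the latent dynamics. The matrices are toy stand-ins for learned quantities.

```python
import torch

A = torch.eye(3) * 0.9          # state transition
B = torch.randn(3, 1) * 0.1     # control input
C = torch.eye(3) * 0.05         # how the innovation enters

def transition(z, u, beta):
    return z @ A.T + u @ B.T + beta @ C.T

z = torch.zeros(1, 3)
for _ in range(10):
    u = torch.randn(1, 1)       # action
    beta = torch.randn(1, 3)    # stochastic innovation (sampled from q in DVBF)
    z = transition(z, u, beta)
```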

Theano: A Python framework for fast computation of mathematical expressions

1 code implementation • 9 May 2016 • The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang

Since its introduction, Theano has been one of the most widely used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements.

BIG-bench Machine Learning • Clustering • +2
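
For context, a minimal example of the workflow Theano compiles: a symbolic expression graph is built once, then turned into an optimised callable.

```python
import theano
import theano.tensor as T

x = T.dvector('x')                       # symbolic input vector
y = (x ** 2).sum()                       # symbolic expression
grad = T.grad(y, x)                      # symbolic differentiation
f = theano.function([x], [y, grad])      # compiled function

value, gradient = f([1.0, 2.0, 3.0])     # -> 14.0, [2.0, 4.0, 6.0]
```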

Efficient Empowerment

no code implementations • 28 Sep 2015 • Maximilian Karl, Justin Bayer, Patrick van der Smagt

Empowerment is a natural candidate for an intrinsic reward signal in the context of reinforcement learning: the agent will place itself in a situation where its actions have maximum stability and maximum influence on the future.

Fast Adaptive Weight Noise

no code implementations • 19 Jul 2015 • Justin Bayer, Maximilian Karl, Daniela Korhammer, Patrick van der Smagt

Marginalising out uncertain quantities within the internal representations or parameters of neural networks is of central importance for a wide range of learning techniques, such as empirical, variational or full Bayesian methods.

Gaussian Processes
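
A sketch of the Gaussian weight-noise scheme this line of work marginalises over: each weight has a mean and a variance, and a reparameterised sample is drawn per forward pass, so the noise itself can be adapted by gradient descent.

```python
import torch

# Variational parameters of a 4 -> 16 linear layer.
w_mu = torch.randn(16, 4, requires_grad=True)
w_logvar = torch.full((16, 4), -4.0, requires_grad=True)

def noisy_linear(x):
    # Reparameterised weight sample: w = mu + sigma * eps.
    w = w_mu + torch.exp(0.5 * w_logvar) * torch.randn_like(w_mu)
    return x @ w.T

out = noisy_linear(torch.randn(8, 4))
```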

Learning Stochastic Recurrent Networks

1 code implementation • 27 Nov 2014 • Justin Bayer, Christian Osendorfer

Leveraging advances in variational inference, we propose to enhance recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs).

Variational Inference
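
A sketch of the STORN construction: an ordinary recurrent cell receives a latent sample alongside the input at every step, turning the RNN into a stochastic sequence model trainable by variational inference. Sizes are arbitrary.

```python
import torch
import torch.nn as nn

cell = nn.GRUCell(input_size=4 + 2, hidden_size=8)   # input = [x_t, z_t]

h = torch.zeros(1, 8)
for x_t in torch.randn(10, 1, 4):                    # a length-10 sequence
    z_t = torch.randn(1, 2)                          # latent sample (from q during training)
    h = cell(torch.cat([x_t, z_t], dim=-1), h)
```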

Regularizing Recurrent Networks - On Injected Noise and Norm-based Methods

no code implementations • 21 Oct 2014 • Saahil Ognawala, Justin Bayer

Advancements in parallel processing have led to a surge in applications of multilayer perceptrons (MLPs) and deep learning in the past decades.

Variational inference of latent state sequences using Recurrent Networks

no code implementations • 6 Jun 2014 • Justin Bayer, Christian Osendorfer

Recent advances in the estimation of deep directed graphical models and recurrent networks allow us to contribute to the removal of a blind spot in the area of probabilistic modelling of time series.

Denoising Imputation +3

Convolutional Neural Networks learn compact local image descriptors

no code implementations • 30 Apr 2013 • Christian Osendorfer, Justin Bayer, Patrick van der Smagt

A standard deep convolutional neural network paired with a suitable loss function learns compact local image descriptors that perform comparably to state-of-the-art approaches.
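
A sketch of that recipe under common assumptions: a small CNN maps patches to compact descriptors, trained so matching pairs are close and non-matching pairs are pushed apart. The contrastive loss here is one common choice of "suitable loss function", not necessarily the paper's.

```python
import torch
import torch.nn as nn

# Tiny descriptor network: 32x32 grayscale patch -> 16-D descriptor.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16)
)

def contrastive_loss(d1, d2, match, margin=1.0):
    dist = (d1 - d2).norm(dim=1)
    # Pull matching pairs together, push non-matching pairs past the margin.
    return torch.where(match, dist ** 2, (margin - dist).clamp(min=0) ** 2).mean()

p1, p2 = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
loss = contrastive_loss(cnn(p1), cnn(p2), match=torch.tensor([1, 0, 1, 0]).bool())
```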

Unsupervised Feature Learning for low-level Local Image Descriptors

no code implementations • 14 Jan 2013 • Christian Osendorfer, Justin Bayer, Sebastian Urban, Patrick van der Smagt

Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision.

Binarization General Classification

Learning Sequence Neighbourhood Metrics

no code implementations • 9 Sep 2011 • Justin Bayer, Christian Osendorfer, Patrick van der Smagt

Recurrent neural networks (RNNs) in combination with a pooling operator and the neighbourhood components analysis (NCA) objective function are able to detect the characterizing dynamics of sequences and embed them into a fixed-length vector space of arbitrary dimensionality.

General Classification • Metric Learning
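
A sketch of the described pipeline with mean pooling and a simple softmax-over-distances NCA term; both concrete choices are illustrative, not necessarily the paper's.

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=3, hidden_size=8, batch_first=True)

def embed(seq):
    h, _ = rnn(seq)            # (batch, time, 8) hidden states
    return h.mean(dim=1)       # pooling -> fixed-length embedding

def nca_loss(emb, labels):
    d = torch.cdist(emb, emb) ** 2
    # Softmax over negative distances, excluding each point itself.
    p = torch.softmax(-d + torch.eye(len(emb)) * -1e9, dim=1)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Maximise the probability of selecting a same-class neighbour.
    return -torch.log((p * same).sum(dim=1) + 1e-9).mean()

seqs = torch.randn(6, 20, 3)                     # 6 sequences of length 20
loss = nca_loss(embed(seqs), torch.tensor([0, 0, 1, 1, 2, 2]))
```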
