no code implementations • 20 Apr 2023 • Baris Kayalibay, Atanas Mirchev, Ahmed Agha, Patrick van der Smagt, Justin Bayer
Partially-observable problems pose a trade-off between reducing costs and gathering information.
no code implementations • 6 Dec 2022 • Atanas Mirchev, Baris Kayalibay, Ahmed Agha, Patrick van der Smagt, Daniel Cremers, Justin Bayer
We introduce PRISM, a method for real-time filtering in a probabilistic generative model of agent motion and visual perception.
no code implementations • 25 Jan 2022 • Baris Kayalibay, Atanas Mirchev, Patrick van der Smagt, Justin Bayer
We introduce a method for real-time navigation and tracking with differentiably rendered world models.
no code implementations • ICLR Workshop SSL-RL 2021 • Baris Kayalibay, Atanas Mirchev, Patrick van der Smagt, Justin Bayer
We examine the effect of the conditioning gap on model-based reinforcement learning with variational world models.
no code implementations • ICLR 2021 • Justin Bayer, Maximilian Soelch, Atanas Mirchev, Baris Kayalibay, Patrick van der Smagt
Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO).
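For reference, the objective referred to here is the standard evidence lower bound, written for a sequential latent-variable model with observations $x_{1:T}$ and latents $z_{1:T}$ (notation assumed here, not taken from the paper):

$$\log p_\theta(x_{1:T}) \;\ge\; \mathbb{E}_{q_\phi(z_{1:T} \mid x_{1:T})}\big[\log p_\theta(x_{1:T} \mid z_{1:T})\big] \;-\; \mathrm{KL}\big(q_\phi(z_{1:T} \mid x_{1:T}) \,\|\, p_\theta(z_{1:T})\big),$$

where $q_\phi$ is the amortised approximate posterior.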
no code implementations • ICLR 2021 • Atanas Mirchev, Baris Kayalibay, Patrick van der Smagt, Justin Bayer
We solve the problem of 6-DoF localisation and 3D dense reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model.
no code implementations • ICML 2020 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
The Euclidean metric is commonly used, but it has the drawback of ignoring information about the similarity of data that is stored in the decoder and captured by the framework of Riemannian geometry.
no code implementations • 14 Oct 2019 • Adnan Akhundov, Maximilian Soelch, Justin Bayer, Patrick van der Smagt
We address tracking and prediction of multiple moving objects in visual data streams as inference and sampling in a disentangled latent state-space model.
no code implementations • 25 Sep 2019 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space.
no code implementations • 23 Aug 2019 • Alexej Klushyn, Nutan Chen, Botond Cseke, Justin Bayer, Patrick van der Smagt
We address the problem of one-to-many mappings in supervised learning, where a single instance has many different solutions of possibly equal cost.
no code implementations • 18 Mar 2019 • Maximilian Soelch, Adnan Akhundov, Patrick van der Smagt, Justin Bayer
Recently, it has been shown that many functions on sets can be represented by sum decompositions.
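A sum decomposition in this sense expresses a permutation-invariant function of a set through an elementwise map and an aggregate map (a generic sketch, not notation from the paper):

$$f(\{x_1, \dots, x_n\}) = \rho\Big(\sum_{i=1}^{n} \phi(x_i)\Big),$$

with $\phi$ encoding individual elements and $\rho$ acting on their sum.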
1 code implementation • 14 Jan 2019 • Georgi Dikov, Patrick van der Smagt, Justin Bayer
In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth.
no code implementations • 19 Dec 2018 • Nutan Chen, Francesco Ferroni, Alexej Klushyn, Alexandros Paraschos, Justin Bayer, Patrick van der Smagt
The length of the geodesic between two data points along a Riemannian manifold, induced by a deep generative model, yields a principled measure of similarity.
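Concretely, for a decoder $g$ mapping latent codes to the data space, the length of a latent curve $\gamma : [0,1] \to \mathcal{Z}$ under the induced (pullback) metric is (a standard formulation, assumed here rather than quoted from the paper):

$$L(\gamma) = \int_0^1 \big\| J_g(\gamma(t))\, \dot{\gamma}(t) \big\|\, \mathrm{d}t,$$

where $J_g$ is the Jacobian of the decoder; the geodesic is the curve minimising this length between two latent points.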
no code implementations • 18 May 2018 • Atanas Mirchev, Baris Kayalibay, Maximilian Soelch, Patrick van der Smagt, Justin Bayer
Model-based approaches bear great promise for decision making of agents interacting with the physical world.
no code implementations • 3 Nov 2017 • Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt
Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source, the latent space, to samples from a more complex distribution represented by a dataset.
no code implementations • 13 Oct 2017 • Maximilian Karl, Maximilian Soelch, Philip Becker-Ehmck, Djalel Benbouzid, Patrick van der Smagt, Justin Bayer
We introduce a methodology for efficiently computing a lower bound to empowerment, allowing it to be used as an unsupervised cost function for policy learning in real-time control.
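As background, empowerment is usually defined as the channel capacity between an agent's actions and its resulting future states (standard definition, notation assumed):

$$\mathcal{E}(s) = \max_{\omega(a \mid s)} I(A; S' \mid s),$$

i.e. the maximal mutual information between the action distribution $\omega$ and the successor state $S'$; the contribution described here is an efficiently computable lower bound to this quantity.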
no code implementations • 23 Jun 2016 • Maximilian Karl, Justin Bayer, Patrick van der Smagt
Tactile information is important for gripping, stable grasp, and in-hand manipulation, yet the complexity of tactile data prevents widespread use of such sensors.
no code implementations • 21 Jun 2016 • Maximilian Karl, Artur Lohrer, Dhananjay Shah, Frederik Diehl, Max Fiedler, Saahil Ognawala, Justin Bayer, Patrick van der Smagt
We study the responses of two tactile sensors, the fingertip sensor from the iCub and the BioTac under different external stimuli.
4 code implementations • 20 May 2016 • Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models.
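The latent Markovian state-space models referred to here have the generic form (a general sketch, not the specific parametrisation proposed in the paper):

$$z_{t+1} = f(z_t, u_t, \beta_t), \qquad x_t \sim p(x_t \mid z_t),$$

with latent state $z_t$, control input $u_t$, process noise $\beta_t$, and observations $x_t$; DVBF learns such a model together with its inference network from data.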
1 code implementation • 9 May 2016 • The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
Since its introduction, Theano has been one of the most widely used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements.
no code implementations • 23 Feb 2016 • Maximilian Soelch, Justin Bayer, Marvin Ludersdorfer, Patrick van der Smagt
Approximate variational inference has been shown to be a powerful tool for modelling unknown, complex probability distributions.
no code implementations • 28 Sep 2015 • Maximilian Karl, Justin Bayer, Patrick van der Smagt
This is a natural candidate for an intrinsic reward signal in the context of reinforcement learning: the agent will place itself in situations where its actions have maximum stability and maximum influence on the future.
no code implementations • 19 Jul 2015 • Justin Bayer, Maximilian Karl, Daniela Korhammer, Patrick van der Smagt
Marginalising out uncertain quantities within the internal representations or parameters of neural networks is of central importance for a wide range of learning techniques, such as empirical, variational or full Bayesian methods.
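In the variational case, marginalising over network weights $w$ amounts to optimising a lower bound of the form (a generic formulation, given here for illustration):

$$\log p(\mathcal{D}) \;\ge\; \mathbb{E}_{q(w)}\big[\log p(\mathcal{D} \mid w)\big] \;-\; \mathrm{KL}\big(q(w) \,\|\, p(w)\big),$$

where $q(w)$ is an approximate posterior over the weights and $p(w)$ a prior.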
1 code implementation • 27 Nov 2014 • Justin Bayer, Christian Osendorfer
Leveraging advances in variational inference, we propose to enhance recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs).
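A generative sketch of a recurrent network augmented with latent variables, in the spirit described here (notation assumed, not taken from the paper):

$$z_t \sim \mathcal{N}(0, I), \qquad h_t = f(h_{t-1}, x_{t-1}, z_t), \qquad x_t \sim p(x_t \mid h_t),$$

i.e. latent variables are injected into the recurrent transition at every step, and inference over them is carried out variationally.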
no code implementations • 21 Oct 2014 • Saahil Ognawala, Justin Bayer
Advancements in parallel processing have led to a surge in applications of multilayer perceptrons (MLPs) and deep learning over the past decades.
no code implementations • 6 Jun 2014 • Justin Bayer, Christian Osendorfer
Recent advances in the estimation of deep directed graphical models and recurrent networks let us contribute to removing a blind spot in the probabilistic modelling of time series.
1 code implementation • 4 Nov 2013 • Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen, Sebastian Urban, Patrick van der Smagt
Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data.
no code implementations • 30 Apr 2013 • Christian Osendorfer, Justin Bayer, Patrick van der Smagt
A standard deep convolutional neural network paired with a suitable loss function learns compact local image descriptors that perform comparably to state-of-the-art approaches.
no code implementations • 14 Jan 2013 • Christian Osendorfer, Justin Bayer, Sebastian Urban, Patrick van der Smagt
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision.
no code implementations • 9 Sep 2011 • Justin Bayer, Christian Osendorfer, Patrick van der Smagt
Recurrent neural networks (RNNs) in combination with a pooling operator and the neighbourhood components analysis (NCA) objective function are able to detect the characterizing dynamics of sequences and embed them into a fixed-length vector space of arbitrary dimensionality.
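The NCA objective mentioned here maximises the expected leave-one-out classification accuracy under a softmax over embedding distances (standard formulation, given as background):

$$p_{ij} = \frac{\exp\!\big(-\|e_i - e_j\|^2\big)}{\sum_{k \neq i} \exp\!\big(-\|e_i - e_k\|^2\big)}, \qquad \mathcal{J} = \sum_i \sum_{j : y_j = y_i} p_{ij},$$

where $e_i$ is the fixed-length embedding produced for sequence $i$ by the pooled RNN and $y_i$ its label.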