1 code implementation • 27 Feb 2023 • Robin Ruff, Patrick Reiser, Jan Stühmer, Pascal Friederich
Graph neural networks (GNNs) have been applied to a large variety of applications in materials science and chemistry.
no code implementations • 17 Jul 2022 • Ruchika Chavhan, Henry Gouk, Jan Stühmer, Timothy Hospedales
Providing invariances in a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified.
1 code implementation • CVPR 2022 • Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, Timothy M. Hospedales
To this end, we explore few-shot learning from the perspective of neural network architecture, as well as a three-stage pipeline of network updates under different data supplies: unsupervised external data is used for pre-training, base categories are used to simulate few-shot tasks for meta-training, and the scarcely labelled data of a novel task is used for fine-tuning.
Ranked #1 on Few-Shot Image Classification on Meta-Dataset
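The three-stage pipeline above (pre-training, episodic meta-training, fine-tuning) can be sketched as follows. This is an illustrative toy sketch, not the paper's implementation: the function names, the placeholder `embed` feature extractor, and the nearest-centroid (prototype-style) classifier used for episodes are assumptions chosen to make the stages concrete.

```python
# Toy sketch of a three-stage few-shot pipeline:
#   1) pre-train a backbone on unlabeled external data,
#   2) meta-train on episodes simulated from base categories,
#   3) fine-tune on the few labelled examples of a novel task.
# All names and the trivial "model" are hypothetical placeholders.

def embed(x, params):
    # Placeholder feature extractor; a real pipeline would use the
    # pre-trained backbone here.
    return [x * w for w in params["weights"]]

def nearest_centroid_predict(support, query, params):
    # Classify a query by distance to class centroids of the support
    # set, in the style of prototype-based few-shot classifiers.
    feats = {}
    for x, y in support:
        feats.setdefault(y, []).append(embed(x, params))
    protos = {y: [sum(d) / len(d) for d in zip(*fs)] for y, fs in feats.items()}
    q = embed(query, params)
    return min(protos, key=lambda y: sum((a - b) ** 2 for a, b in zip(q, protos[y])))

def pretrain(unlabeled_data):
    # Stage 1: self-supervised pre-training (placeholder weights).
    return {"weights": [1.0, 0.5]}

def meta_train(params, base_episodes):
    # Stage 2: evaluate/update the model on simulated few-shot episodes
    # drawn from base categories (here: evaluation only, no update).
    correct = sum(
        nearest_centroid_predict(support, x, params) == y
        for support, (x, y) in base_episodes
    )
    return params, correct / len(base_episodes)

def fine_tune(params, novel_support, query):
    # Stage 3: adapt to the novel task; here adaptation is simply
    # building centroids from the novel support set.
    return nearest_centroid_predict(novel_support, query, params)

params = pretrain(unlabeled_data=None)
episode = ([(0.0, "a"), (0.1, "a"), (10.0, "b"), (9.5, "b")], (9.0, "b"))
params, acc = meta_train(params, [episode])
pred = fine_tune(params, novel_support=[(1.0, "x"), (8.0, "y")], query=7.5)
print(acc, pred)
```

The nearest-centroid head is one common choice for such pipelines; the actual method may differ in backbone, episode construction, and fine-tuning procedure.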
1 code implementation • 25 Aug 2020 • Dorin Ungureanu, Federica Bogo, Silvano Galliani, Pooja Sama, Xin Duan, Casey Meekhof, Jan Stühmer, Thomas J. Cashman, Bugra Tekin, Johannes L. Schönberger, Pawel Olszta, Marc Pollefeys
Mixed reality headsets, such as the Microsoft HoloLens 2, are powerful sensing devices with integrated compute capabilities, making them an ideal platform for computer vision research.
no code implementations • 5 Sep 2019 • Jan Stühmer, Richard E. Turner, Sebastian Nowozin
Second, we demonstrate that the proposed prior encourages a structured latent space that facilitates the learning of disentangled representations.
no code implementations • ICLR 2019 • Jan Stühmer, Richard Turner, Sebastian Nowozin
Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off introduced by modified cost functions like beta-VAE and TCVAE between reconstruction loss and disentanglement.
no code implementations • 23 May 2018 • Sebastian Tschiatschek, Kai Arulkumaran, Jan Stühmer, Katja Hofmann
In this paper we propose DELIP, an approach to model learning for POMDPs that utilizes amortized structured variational inference.