1 code implementation • 27 Jan 2023 • Simon Ott, Konstantin Hebenstreit, Valentin Liévin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, Matthias Samwald
Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks.
2 code implementations • 23 Sep 2022 • Valentin Liévin, Andreas Geert Motzfeldt, Ida Riis Jensen, Ole Winther
Retrieval-augmented models have proven to be effective in natural language processing tasks, yet there remains a lack of research on their optimization using variational inference.
Ranked #4 on Multiple Choice Question Answering (MCQA) on MedMCQA
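For context on what "optimization using variational inference" means here: if the retrieved document $d$ is treated as a latent variable, question answering can be trained through an evidence lower bound. The following is a standard-notation sketch of that kind of bound (symbols $q$, $a$, $d$ for question, answer, and document are illustrative, not copied from the paper):

```latex
\log p_\theta(a \mid q)
\;\ge\;
\mathbb{E}_{d \sim q_\phi(d \mid q, a)}\!\left[\log p_\theta(a \mid q, d)\right]
\;-\;
\mathrm{KL}\!\left(q_\phi(d \mid q, a) \,\|\, p_\theta(d \mid q)\right)
```

The first term rewards documents that support the correct answer; the KL term keeps the learned retriever close to the prior retrieval distribution.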
1 code implementation • 17 Jul 2022 • Valentin Liévin, Christoffer Egeberg Hother, Andreas Geert Motzfeldt, Ole Winther
Although large language models (LLMs) often produce impressive outputs, it remains unclear how they perform in real-world scenarios requiring strong reasoning skills and expert domain knowledge.
Ranked #5 on Question Answering on PubMedQA
Multiple Choice Question Answering (MCQA) +3
no code implementations • 17 Mar 2022 • Darius Chira, Ilian Haralampiev, Ole Winther, Andrea Dittadi, Valentin Liévin
Image super-resolution (SR) techniques are used to generate a high-resolution image from a low-resolution image.
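As a point of reference for what learned SR models improve on, the simplest non-learned baseline is plain interpolation. A minimal nearest-neighbor upsampling sketch (illustrative only, not the method from the paper):

```python
import numpy as np

def nearest_neighbor_upsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Upscale an (H, W) image by an integer factor with nearest-neighbor
    interpolation: every pixel is simply repeated `scale` times per axis."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

low_res = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
high_res = nearest_neighbor_upsample(low_res, scale=2)
print(high_res.shape)  # (4, 4)
```

Learned SR methods aim to predict the high-frequency detail that such interpolation cannot recover.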
1 code implementation • NeurIPS 2020 • Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther
Empirically, for the training of both continuous and discrete generative models, the proposed method yields superior variance reduction, resulting in an SNR for IWAE that increases with $K$ without relying on the reparameterization trick.
1 code implementation • 5 Aug 2020 • Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther
This paper introduces novel results for the score function gradient estimator of the importance weighted variational bound (IWAE).
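For reference, the importance weighted bound (IWAE) the abstract refers to can be written in standard notation as:

```latex
\mathcal{L}_K(x)
\;=\;
\mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z \mid x)}
\!\left[\log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right]
\;\le\; \log p_\theta(x)
```

The score-function (REINFORCE-style) gradient of this bound differentiates through $\log q_\phi$ directly, which is why it applies to discrete latent variables where the reparameterization trick is unavailable.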
no code implementations • Advances in Approximate Bayesian Inference (AABI) Symposium 2019 • Valentin Liévin, Andrea Dittadi, Lars Maaløe, Ole Winther
We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hierarchy of variational memory layers.
2 code implementations • NeurIPS 2019 • Lars Maaløe, Marco Fraccaro, Valentin Liévin, Ole Winther
In this paper we close the performance gap by constructing VAE models that can effectively utilize a deep hierarchy of stochastic variables and model complex covariance structures.
Ranked #18 on Image Generation on ImageNet 32x32 (bpd metric)
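To illustrate what a "deep hierarchy of stochastic variables" means in a VAE's generative model (this is not the BIVA architecture itself, which adds a bidirectional inference network and skip connections — just a minimal ancestral-sampling sketch with fixed random projections standing in for learned layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def projection(x: np.ndarray, out_dim: int) -> np.ndarray:
    """Stand-in for a learned layer: a fixed random linear map plus tanh."""
    w = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.tanh(x @ w)

def sample_hierarchy(z_dims=(8, 16), x_dim=32) -> np.ndarray:
    """Ancestral sampling through a top-down hierarchy p(z1) p(z2|z1) p(x|z2):
    each stochastic layer is conditioned on the sample from the layer above."""
    z = rng.standard_normal(z_dims[0])       # top-level prior: z1 ~ N(0, I)
    for d in z_dims[1:]:
        mu = projection(z, d)                # conditional prior mean for the next level
        z = mu + rng.standard_normal(d)      # z_l ~ N(mu(z_{l-1}), I)
    return projection(z, x_dim)              # decoder mean for p(x | z_L)

x_mean = sample_hierarchy()
print(x_mean.shape)  # (32,)
```

Deeper hierarchies of this form let the model express richer, more structured covariance in $p(x)$ than a single Gaussian latent layer can.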