6 code implementations • ICCV 2019 • Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, Victor Lempitsky
In order to create a personalized talking head model, these works require training on a large dataset of images of a single person.
1 code implementation • ECCV 2020 • Egor Zakharov, Aleksei Ivakhnenko, Aliaksandra Shysheya, Victor Lempitsky
The texture image is generated offline, warped and added to the coarse image to ensure a high effective resolution of synthesized head views.
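The warp-and-add composition described in this entry can be sketched in a few lines of numpy. This is a minimal illustration under assumptions, not the paper's implementation: the function name, the nearest-neighbour warp, and the additive clipping are all hypothetical simplifications of the bi-layer scheme.

```python
import numpy as np

def compose_head_view(coarse, texture, warp_field):
    """Warp an offline-generated texture into the view frame and add it
    to a coarse rendering (illustrative sketch, names hypothetical).

    coarse:     (H, W, 3) low-frequency image predicted per view
    texture:    (Ht, Wt, 3) high-frequency texture generated once, offline
    warp_field: (H, W, 2) per-pixel (y, x) sampling coordinates into texture
    """
    ys = np.clip(warp_field[..., 0].round().astype(int), 0, texture.shape[0] - 1)
    xs = np.clip(warp_field[..., 1].round().astype(int), 0, texture.shape[1] - 1)
    warped = texture[ys, xs]                    # nearest-neighbour warp
    return np.clip(coarse + warped, 0.0, 1.0)   # additive composition
```

A real system would use differentiable bilinear sampling (e.g. a grid-sample operator) rather than nearest-neighbour rounding, so the warp field can be trained end to end.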
1 code implementation • 20 Jun 2022 • Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard E. Turner
In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass of the user data (context).
Ranked #3 on Few-Shot Image Classification on Meta-Dataset
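The squeeze-and-excitation-style gating that CaSE builds on can be sketched as follows. This is a generic illustration of channel-wise modulation computed from a context set in a single forward pass; the function names, weight shapes, and pooling choice are assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def case_scale(context_feats, w1, w2):
    """Context-conditioned channel gating (illustrative sketch).

    context_feats: (N, C, H, W) activations for the task's context set
    w1: (C, C_r), w2: (C_r, C) bottleneck MLP weights (names hypothetical)
    Returns a per-channel scale vector in (0, 1).
    """
    # squeeze: pool over the context set and spatial dims -> (C,)
    pooled = context_feats.mean(axis=(0, 2, 3))
    # excitation: bottleneck MLP with ReLU, then a sigmoid gate
    hidden = np.maximum(pooled @ w1, 0.0)
    return sigmoid(hidden @ w2)

def apply_case(feats, scale):
    """Modulate a pretrained layer's activations channel-wise."""
    return feats * scale[None, :, None, None]
```

The key point the entry makes is that adaptation happens with one forward pass of the context data, not gradient-based fine-tuning: only the cheap gating statistics depend on the new task.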
1 code implementation • 17 Jun 2022 • Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E. Turner
Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication-efficient distributed training protocols.
1 code implementation • 2 Feb 2023 • Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Beguelin, Richard E. Turner, Antti Honkela
There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models.
no code implementations • CVPR 2019 • Aliaksandra Shysheya, Egor Zakharov, Kara-Ali Aliev, Renat Bashirov, Egor Burkov, Karim Iskakov, Aleksei Ivakhnenko, Yury Malkov, Igor Pasechnik, Dmitry Ulyanov, Alexander Vakhitov, Victor Lempitsky
In particular, our system estimates an explicit two-dimensional texture map of the model surface.
no code implementations • 16 Nov 2023 • Lorenzo Bonito, James Requeima, Aliaksandra Shysheya, Richard E. Turner
Over the last few years, Neural Processes have become a useful modelling tool in many application areas, such as healthcare and climate sciences, in which data are scarce and prediction uncertainty estimates are indispensable.
no code implementations • 3 Jan 2024 • Massimiliano Patacchiola, Aliaksandra Shysheya, Katja Hofmann, Richard E. Turner
In this paper, we propose a novel solution to these challenges by exploiting transformers to define a new class of neural flows called Transformer Neural Autoregressive Flows (T-NAFs).
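For context, the autoregressive-flow structure that T-NAFs instantiate (with a transformer as the conditioner) can be sketched with a simple affine transform. This is the standard MAF-style construction, shown only to illustrate the triangular-Jacobian structure; `cond_fn` is a hypothetical stand-in for the transformer conditioner.

```python
import numpy as np

def affine_ar_flow(x, cond_fn):
    """Minimal affine autoregressive flow (illustrative sketch).

    x: (D,) input vector.
    cond_fn(x_prefix) -> (mu, log_sigma) for the next dimension,
    conditioned only on x_<d (the autoregressive property).
    Returns z and log|det Jacobian| of the transform.
    """
    D = x.shape[0]
    z = np.empty(D)
    log_det = 0.0
    for d in range(D):
        mu, log_sigma = cond_fn(x[:d])   # conditioner sees x_<d only
        z[d] = (x[d] - mu) * np.exp(-log_sigma)
        log_det += -log_sigma            # triangular Jacobian: sum of log-diagonals
    return z, log_det
```

Because the Jacobian is triangular, the log-determinant is a cheap sum, which is what makes autoregressive flows tractable for exact likelihood training.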
no code implementations • 6 Feb 2024 • Richard E. Turner, Cristiana-Diana Diaconu, Stratis Markou, Aliaksandra Shysheya, Andrew Y. K. Foong, Bruno Mlodozeniec
Denoising Diffusion Probabilistic Models (DDPMs) are a very popular class of deep generative models that have been successfully applied to a diverse range of problems including image and video generation, protein and material synthesis, weather forecasting, and neural surrogates of partial differential equations.
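The DDPM forward (noising) process mentioned here has a standard closed form, q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I), which can be sampled directly at any timestep. The sketch below shows that identity; it is textbook DDPM machinery, not anything specific to this paper, and the variable names are illustrative.

```python
import numpy as np

def ddpm_forward_sample(x0, t, alphas_cumprod, rng):
    """Draw x_t ~ q(x_t | x_0) for a DDPM (illustrative sketch).

    x0:             clean data array
    t:              integer timestep
    alphas_cumprod: array of cumulative products a_bar_t of (1 - beta_s)
    rng:            numpy Generator for the injected Gaussian noise
    Returns (x_t, eps) so the noise target is available for training.
    """
    a_bar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps
    return x_t, eps
```

Training then amounts to regressing a network's noise prediction onto `eps` given `x_t` and `t`; the reverse (denoising) process inverts this chain step by step.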