1 code implementation • 15 Dec 2022 • Royson Lee, Rui Li, Stylianos I. Venieris, Timothy Hospedales, Ferenc Huszár, Nicholas D. Lane
Recent image degradation estimation methods have enabled single-image super-resolution (SR) approaches to better upsample real-world images.
no code implementations • 19 Oct 2022 • Szilvia Ujváry, Zsigmond Telek, Anna Kerekes, Anna Mészáros, Ferenc Huszár
Sharpness-aware minimization (SAM) aims to improve the generalisation of gradient-based learning by seeking out flat minima.
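The two-step update at the heart of SAM can be sketched on a toy quadratic loss (a minimal illustration, not the paper's code; `lr` and `rho` are illustrative values):

```python
import numpy as np

# Minimal sketch of the SAM update on a toy loss L(w) = 0.5 * ||w||^2.
def loss_grad(w):
    return w  # gradient of 0.5 * ||w||^2

def sam_step(w, lr=0.1, rho=0.05):
    g = loss_grad(w)
    # step 1: ascend to the (approximate) worst-case point in a rho-ball
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # step 2: descend using the gradient evaluated at the perturbed point
    return w - lr * loss_grad(w + eps)

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
```

The inner ascent step is what steers the iterates toward flat regions: a sharp minimum incurs a large loss at the perturbed point, so its gradient pushes the parameters away from it.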
no code implementations • 22 Nov 2021 • Anna Kerekes, Anna Mészáros, Ferenc Huszár
In gradient descent, changing how we parametrize the model can lead to drastically different optimization trajectories, giving rise to a surprising range of meaningful inductive biases: identifying sparse classifiers or reconstructing low-rank matrices without explicit regularization.
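The sparsity effect mentioned here can be demonstrated in a few lines (a standalone toy example, not code from the paper): running gradient descent on the reparametrization w = u ⊙ u from a small initialization recovers a sparse solution to an underdetermined regression problem, with no explicit regularizer.

```python
import numpy as np

# Toy demonstration: with w = u * u and a small init, plain gradient
# descent is implicitly biased toward sparse (nonnegative) solutions.
rng = np.random.default_rng(0)
n, d = 30, 50
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = 1.0                  # 3-sparse, nonnegative ground truth
y = X @ w_true

u = np.full(d, 1e-3)              # small initialization is essential
lr = 0.02
for _ in range(20000):
    w = u * u
    grad_w = X.T @ (X @ w - y) / n
    u -= lr * 2.0 * u * grad_w    # chain rule through w = u * u
w = u * u
```

Gradient descent directly on w would instead return the minimum-Euclidean-norm interpolant, which is dense; only the change of parametrization alters the solution found.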
3 code implementations • NeurIPS 2018 • Iryna Korshunova, Jonas Degrave, Ferenc Huszár, Yarin Gal, Arthur Gretton, Joni Dambre
We present a novel model architecture which leverages deep learning tools to perform exact Bayesian inference on sets of high dimensional, complex observations.
2 code implementations • Twitter 2018 • Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár
Predicting human fixations from images has recently seen large improvements by leveraging deep representations which were pretrained for object recognition.
no code implementations • 11 Dec 2017 • Ferenc Huszár
Elastic weight consolidation (EWC; Kirkpatrick et al., 2017) is a novel algorithm designed to safeguard against catastrophic forgetting in neural networks.
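EWC's safeguard is a quadratic penalty anchoring the parameters to their optimum on a previous task, weighted by the diagonal Fisher information. A minimal sketch with toy values (the function and numbers are illustrative, not from the paper):

```python
import numpy as np

# EWC penalty: 0.5 * lam * sum_i F_i * (theta_i - theta*_A,i)^2,
# where F_i is the diagonal Fisher information from task A.
def ewc_penalty(theta, theta_star_a, fisher_diag, lam=1.0):
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star_a) ** 2)

theta_a = np.array([1.0, -0.5])   # task-A optimum
fisher = np.array([2.0, 0.1])     # important vs. unimportant weight
penalty = ewc_penalty(np.array([1.5, 0.5]), theta_a, fisher)
```

Parameters with high Fisher information are held close to their task-A values, while unimportant parameters remain free to adapt to the new task.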
4 code implementations • 1 Mar 2017 • Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár
We propose a new approach to the problem of optimizing autoencoders for lossy image compression.
no code implementations • 27 Feb 2017 • Ferenc Huszár
Generative adversarial networks (GANs) have given us a great tool to fit implicit generative models to data.
no code implementations • 14 Oct 2016 • Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár
We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models.
39 code implementations • CVPR 2016 • Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, Zehan Wang
In these approaches the LR image is first upscaled to high resolution, typically by bicubic interpolation, which means that the super-resolution (SR) operation is performed in HR space.
Ranked #1 on Video Super-Resolution on Xiph HD (4x upscaling)
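The alternative this paper proposes, sub-pixel convolution, keeps computation in LR space and finishes with a periodic-shuffling step that rearranges channels into spatial positions. A standalone toy version of that shuffle (following the usual (C·r², H, W) → (C, H·r, W·r) convention, not the paper's code):

```python
import numpy as np

# Periodic shuffle behind sub-pixel convolution: each group of r*r
# channels is rearranged into an r x r block of HR pixels.
def pixel_shuffle(x, r):
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16.0).reshape(4, 2, 2)  # C=1, r=2: four LR feature maps
y = pixel_shuffle(x, 2)               # one 4x4 HR map
```

Because all convolutions run on the LR grid, the network does r² times less spatial work than an equivalent model operating in HR space.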
1 code implementation • 16 Nov 2015 • Ferenc Huszár
We introduce a generalisation of adversarial training, and show how such a method can interpolate between maximum likelihood training and our ideal training objective.
1 code implementation • 7 Apr 2012 • Ferenc Huszár, David Duvenaud
We show that the criterion minimised when selecting samples in kernel herding is equivalent to the posterior variance in Bayesian quadrature.
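Written out in standard notation (this sketch follows the usual maximum mean discrepancy definition rather than quoting the paper), the criterion that herding greedily minimises for samples $x_1,\dots,x_n$ from a target $p$ with kernel $k$ is

```latex
\mathrm{MMD}^2\!\left(\{x_i\}_{i=1}^n,\, p\right)
  = \mathbb{E}_{x,x'\sim p}\, k(x,x')
  \;-\; \frac{2}{n}\sum_{i=1}^{n} \mathbb{E}_{x\sim p}\, k(x, x_i)
  \;+\; \frac{1}{n^2}\sum_{i,j=1}^{n} k(x_i, x_j)
```

and the equivalence established in the paper identifies this quantity with the posterior variance of the integral estimate in Bayesian quadrature under a Gaussian process prior with the same kernel $k$.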
2 code implementations • 24 Dec 2011 • Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, Máté Lengyel
Information theoretic active learning has been widely studied for probabilistic models.
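The BALD acquisition from this line of work scores a candidate input by the mutual information between its label and the model parameters. A minimal sketch for binary outputs, estimated from posterior samples of the predictive probability (function names and the sample values are illustrative):

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def bald_score(probs):
    # probs: p(y=1 | x, theta_s) for posterior samples theta_s.
    # Entropy of the mean prediction minus mean entropy of predictions.
    return binary_entropy(probs.mean()) - binary_entropy(probs).mean()

# Confident but disagreeing posterior samples score high ...
high = bald_score(np.array([0.02, 0.98]))
# ... while unanimous uncertainty scores zero.
low = bald_score(np.array([0.5, 0.5]))
```

The score is high exactly when the posterior samples disagree confidently, i.e. when observing the label would be most informative about the parameters; inputs the model is merely uncertain about, but uniformly so, score zero.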