Search Results for author: Ferenc Huszár

Found 14 papers, 7 papers with code

Meta-Learned Kernel For Blind Super-Resolution Kernel Estimation

no code implementations15 Dec 2022 Royson Lee, Rui Li, Stylianos I. Venieris, Timothy Hospedales, Ferenc Huszár, Nicholas D. Lane

Recent image degradation estimation methods have enabled single-image super-resolution (SR) approaches to better upsample real-world images.

Blind Super-Resolution · Image Super-Resolution +1

Rethinking Sharpness-Aware Minimization as Variational Inference

no code implementations19 Oct 2022 Szilvia Ujváry, Zsigmond Telek, Anna Kerekes, Anna Mészáros, Ferenc Huszár

Sharpness-aware minimization (SAM) aims to improve the generalisation of gradient-based learning by seeking out flat minima.

Variational Inference
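
The abstract above can be illustrated with a minimal sketch of a SAM update (Foret et al., 2021), the method the paper reinterprets: perturb the weights toward the local worst case within an L2 ball, then descend using the gradient taken at the perturbed point. The names (`sam_step`, `rho`, `lr`) and the toy quadratic loss are our illustration, not code from the paper.

```python
import numpy as np

def sam_step(w, loss_grad, rho=0.05, lr=0.1):
    """One sharpness-aware minimisation step: ascend within an L2 ball
    of radius rho to an approximate worst-case point, then apply the
    gradient evaluated there."""
    g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * loss_grad(w + eps)           # descend using the perturbed gradient

# Toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
for _ in range(200):
    w = sam_step(w, lambda v: v)
```

On this toy loss the iterates shrink toward the (flat and sharp alike) minimum at the origin; the method's interest lies in losses where flat and sharp minima differ.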

Causal de Finetti: On the Identification of Invariant Causal Structure in Exchangeable Data

no code implementations29 Mar 2022 Siyuan Guo, Viktor Tóth, Bernhard Schölkopf, Ferenc Huszár

It is known that under the i.i.d. assumption, even with infinite data, there is a limit to how fine-grained a causal structure we can identify.

Causal Inference

Depth Without the Magic: Inductive Bias of Natural Gradient Descent

no code implementations22 Nov 2021 Anna Kerekes, Anna Mészáros, Ferenc Huszár

In gradient descent, changing how we parametrize the model can lead to drastically different optimization trajectories, giving rise to a surprising range of meaningful inductive biases: identifying sparse classifiers or reconstructing low-rank matrices without explicit regularization.

Inductive Bias
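
As a hedged illustration of the parametrisation effect described above (not code from the paper): the same underdetermined least-squares problem, optimised by gradient descent either directly or through a Hadamard ("diagonal network") parametrisation w = u ⊙ v, converges to different solutions; the latter parametrisation is known to bias gradient descent toward sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))   # underdetermined: 5 equations, 10 unknowns
y = X[:, 0]                    # generated by the 1-sparse truth w* = e_1

def gd_direct(steps=20000, lr=0.01):
    """Plain gradient descent on the squared error, from zero init."""
    w = np.zeros(10)
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y)
    return w

def gd_hadamard(steps=20000, lr=0.01):
    """Same loss, but with w = u * v; gradient descent on (u, v)
    is known to bias the recovered solution toward sparsity."""
    u = np.full(10, 0.1)
    v = np.full(10, 0.1)
    for _ in range(steps):
        g = X.T @ (X @ (u * v) - y)
        u, v = u - lr * g * v, v - lr * g * u
    return u * v

w1, w2 = gd_direct(), gd_hadamard()
```

Both runs fit the data; w1 approaches the minimum-L2-norm interpolant, while w2 tends to concentrate its mass on the first coordinate.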

BRUNO: A Deep Recurrent Model for Exchangeable Data

3 code implementations NeurIPS 2018 Iryna Korshunova, Jonas Degrave, Ferenc Huszár, Yarin Gal, Arthur Gretton, Joni Dambre

We present a novel model architecture which leverages deep learning tools to perform exact Bayesian inference on sets of high dimensional, complex observations.

Anomaly Detection · Bayesian Inference +2

Faster gaze prediction with dense networks and Fisher pruning

2 code implementations Twitter 2018 Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár

Predicting human fixations from images has recently seen large improvements by leveraging deep representations which were pretrained for object recognition.

Gaze Estimation · Gaze Prediction +3

On Quadratic Penalties in Elastic Weight Consolidation

no code implementations11 Dec 2017 Ferenc Huszár

Elastic weight consolidation (EWC; Kirkpatrick et al., 2017) is a novel algorithm designed to safeguard against catastrophic forgetting in neural networks.
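
For context, the quadratic penalty the note analyses can be sketched as follows (standard EWC form from Kirkpatrick et al., 2017; variable names and numbers are our illustration): each parameter is anchored to its value after training on the earlier task, weighted by its diagonal Fisher information.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """lam/2 * sum_i F_i * (theta_i - theta*_i)^2 -- the EWC quadratic
    penalty anchoring parameters to their earlier-task optimum."""
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -0.5])   # parameters learned on task A
fisher = np.array([10.0, 0.1])       # high-Fisher parameters are protected more
theta = np.array([1.2, 0.5])         # candidate parameters while learning task B
penalty = ewc_penalty(theta, theta_star, fisher)
# 0.5 * (10 * 0.04 + 0.1 * 1.0) = 0.25
```

Note how the large deviation in the low-Fisher coordinate costs far less than the small deviation in the high-Fisher one.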

Lossy Image Compression with Compressive Autoencoders

4 code implementations1 Mar 2017 Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár

We propose a new approach to the problem of optimizing autoencoders for lossy image compression.

Image Compression

Variational Inference using Implicit Distributions

no code implementations27 Feb 2017 Ferenc Huszár

Generative adversarial networks (GANs) have given us a great tool to fit implicit generative models to data.

Denoising · Density Ratio Estimation +2

Amortised MAP Inference for Image Super-resolution

1 code implementation14 Oct 2016 Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár

We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models.

Denoising · Image Super-Resolution +1

How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?

no code implementations16 Nov 2015 Ferenc Huszár

We introduce a generalisation of adversarial training, and show how such a method can interpolate between maximum likelihood training and our ideal training objective.

Image Captioning

Optimally-Weighted Herding is Bayesian Quadrature

1 code implementation7 Apr 2012 Ferenc Huszár, David Duvenaud

We show that the criterion minimised when selecting samples in kernel herding is equivalent to the posterior variance in Bayesian quadrature.
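
A hedged sketch of the Bayesian-quadrature side of this equivalence, assuming an RBF kernel and a standard normal target density (node placement here is hand-picked rather than chosen by herding): the optimal quadrature weights are w = K⁻¹z, where z_i is the kernel mean embedding E_p[k(x_i, X)], which has a closed form for this kernel/density pair.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """k(x, x') = exp(-(x - x')^2 / (2 ell^2))"""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def kernel_mean(x, ell=1.0):
    """Closed-form E_{X ~ N(0,1)}[k(x, X)] for the RBF kernel."""
    return ell / np.sqrt(ell ** 2 + 1) * np.exp(-x ** 2 / (2 * (ell ** 2 + 1)))

nodes = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # hand-picked, not herded
K = rbf(nodes, nodes)
z = kernel_mean(nodes)
w = np.linalg.solve(K, z)        # Bayesian quadrature weights w = K^{-1} z

# BQ integrates functions in the span of k(node_i, .) exactly:
f_vals = K[:, 0]                 # f(x) = k(nodes[0], x) evaluated at the nodes
estimate = w @ f_vals            # equals z[0] = E_p[f] up to numerics
```

The paper's point is that herding's node-selection criterion coincides with the posterior variance this quadrature rule minimises, so the weighted rule dominates uniformly-weighted herding.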
