Search Results for author: Fredrik K. Gustafsson

Found 14 papers, 14 papers with code

Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models

2 code implementations · 15 Apr 2024 · Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön

Though diffusion models have been successfully applied to various image restoration (IR) tasks, their performance is sensitive to the choice of training datasets.

Image Generation · Language Modelling +1

Controlling Vision-Language Models for Multi-Task Image Restoration

1 code implementation · 2 Oct 2023 · Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön

In this paper, we present a degradation-aware vision-language model (DA-CLIP) to better transfer pretrained vision-language models to low-level vision tasks as a multi-task framework for image restoration.

Image Dehazing · Image Denoising +8

How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts?

1 code implementation · 7 Feb 2023 · Fredrik K. Gustafsson, Martin Danelljan, Thomas B. Schön

We then employ our benchmark to evaluate many of the most common uncertainty estimation methods, as well as two state-of-the-art uncertainty scores from the task of out-of-distribution detection.

Out-of-Distribution Detection · regression

Learning Proposals for Practical Energy-Based Regression

1 code implementation · 22 Oct 2021 · Fredrik K. Gustafsson, Martin Danelljan, Thomas B. Schön

Energy-based models (EBMs) have experienced a resurgence within machine learning in recent years, including as a promising alternative for probabilistic regression.

regression
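
The normalizer Z(x) that turns an energy-based model into a density is intractable, and importance sampling with a proposal distribution is the standard route that learned proposals aim to improve. A minimal sketch with a toy quadratic energy and a hand-picked Gaussian proposal (both are illustrative assumptions, not the paper's learned proposal network):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x, y):
    """Toy energy f(x, y); stands in for a deep network output."""
    return -(y - x) ** 2

def approx_log_Z(x, mu, sigma, M=10000):
    """Importance-sampled estimate of log Z(x) = log ∫ exp(f(x, y)) dy,
    using a Gaussian proposal q(y) = N(mu, sigma^2)."""
    y = rng.normal(mu, sigma, size=M)
    log_q = -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    log_w = energy(x, y) - log_q
    # log-mean-exp for numerical stability
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

# For f(x, y) = -(y - x)^2, the true value is Z(x) = sqrt(pi).
est = approx_log_Z(0.0, mu=0.0, sigma=1.0)
```

The closer q is to the target density, the lower the variance of this estimator, which is why learning the proposal (rather than fixing it by hand) matters in practice.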

Uncertainty-Aware Body Composition Analysis with Deep Regression Ensembles on UK Biobank MRI

1 code implementation · 18 Jan 2021 · Taro Langner, Fredrik K. Gustafsson, Benny Avelin, Robin Strand, Håkan Ahlström, Joel Kullberg

The results indicate that deep regression ensembles could ultimately provide automated, uncertainty-aware measurements of body composition for the more than 120,000 UK Biobank neck-to-knee body MRI scans to be acquired within the coming years.

regression · Uncertainty Quantification
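
As a rough illustration of the ensemble idea (not the paper's MRI pipeline), a deep ensemble averages its members for a prediction and uses their disagreement as an uncertainty estimate:

```python
import numpy as np

def ensemble_predict(models, x):
    """Aggregate predictions from an ensemble of regression models.

    `models` is assumed to be a list of callables mapping an input to a
    scalar prediction; the spread across members serves as an
    epistemic uncertainty estimate.
    """
    preds = np.array([m(x) for m in models])
    mean = preds.mean()
    var = preds.var()  # disagreement between members
    return mean, var

# Toy ensemble: three "models" with slightly different fitted weights.
models = [lambda x, w=w: w * x for w in (1.9, 2.0, 2.1)]
mean, var = ensemble_predict(models, 3.0)
```

In practice the members are independently trained networks; inputs far from the training distribution tend to produce larger disagreement, and hence larger reported uncertainty.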

Accurate 3D Object Detection using Energy-Based Models

1 code implementation · 8 Dec 2020 · Fredrik K. Gustafsson, Martin Danelljan, Thomas B. Schön

On the KITTI dataset, our proposed approach consistently outperforms the SA-SSD baseline across all 3DOD metrics, demonstrating the potential of EBM-based regression for highly accurate 3DOD.

3D Object Detection · Object +2

Deep Energy-Based NARX Models

1 code implementation · 8 Dec 2020 · Johannes N. Hendriks, Fredrik K. Gustafsson, Antônio H. Ribeiro, Adrian G. Wills, Thomas B. Schön

This paper addresses the problem of learning nonlinear ARX models from system input–output data.
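
The NARX setup can be sketched as building lagged regressor vectors from the input–output records and fitting y_t = f(phi_t); the lag orders and toy signals below are illustrative assumptions:

```python
import numpy as np

def narx_regressors(u, y, nu=2, ny=2):
    """Build NARX regressor vectors from input u and output y.

    Each row phi_t = [y_{t-1}, ..., y_{t-ny}, u_{t-1}, ..., u_{t-nu}]
    is paired with target y_t; a (deep) model then learns y_t = f(phi_t).
    """
    start = max(nu, ny)
    phi, target = [], []
    for t in range(start, len(y)):
        past_y = y[t - ny:t][::-1]   # y_{t-1}, ..., y_{t-ny}
        past_u = u[t - nu:t][::-1]   # u_{t-1}, ..., u_{t-nu}
        phi.append(np.concatenate([past_y, past_u]))
        target.append(y[t])
    return np.array(phi), np.array(target)

# Toy signals just to show the shapes.
u = np.arange(6, dtype=float)
y = np.arange(6, dtype=float) * 10
Phi, Y = narx_regressors(u, y)
```

In the energy-based variant, f is not evaluated directly; instead a network scores candidate (phi_t, y_t) pairs, giving a full conditional distribution over y_t rather than a point prediction.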

How to Train Your Energy-Based Model for Regression

1 code implementation · 4 May 2020 · Fredrik K. Gustafsson, Martin Danelljan, Radu Timofte, Thomas B. Schön

While EBMs are commonly employed for generative image modeling, recent work has also applied them to regression tasks, achieving state-of-the-art performance on object detection and visual tracking.

object-detection · Object Detection +3

Energy-Based Models for Deep Probabilistic Regression

1 code implementation · ECCV 2020 · Fredrik K. Gustafsson, Martin Danelljan, Goutam Bhat, Thomas B. Schön

In our proposed approach, we create an energy-based model of the conditional target density p(y|x), using a deep neural network to predict the un-normalized density from (x, y).

Ranked #1 on Object Detection on COCO test-dev (Hardware Burden metric)

Head Pose Estimation · object-detection +4
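
The quoted construction can be sketched for a scalar target: a network outputs an un-normalized log-density f(x, y), and p(y|x) = exp(f(x, y)) / Z(x). Here a toy quadratic stands in for the deep network, and Z(x) is approximated on a grid (both simplifying assumptions for illustration):

```python
import numpy as np

def energy(x, y, w=1.0):
    """Toy scalar 'network' f(x, y): higher where y is close to w*x.

    Stands in for the deep network that predicts the un-normalized
    log-density from the pair (x, y).
    """
    return -(y - w * x) ** 2

def conditional_density(x, y_grid):
    """p(y|x) = exp(f(x, y)) / Z(x), with Z(x) approximated on a 1-D grid."""
    f = energy(x, y_grid)
    unnorm = np.exp(f - f.max())   # subtract max for numerical stability
    dz = y_grid[1] - y_grid[0]
    Z = unnorm.sum() * dz          # Riemann-sum approximation of Z(x)
    return unnorm / Z

y_grid = np.linspace(-5, 5, 1001)
p = conditional_density(2.0, y_grid)  # density peaks near y = 2
```

For low-dimensional targets such a grid is feasible at test time; the mode of p(y|x) (or a gradient ascent refinement of an initial guess) then serves as the prediction.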

Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision

1 code implementation · 4 Jun 2019 · Fredrik K. Gustafsson, Martin Danelljan, Thomas B. Schön

We therefore accept this task and propose a comprehensive evaluation framework for scalable epistemic uncertainty estimation methods in deep learning.

Depth Completion · Semantic Segmentation
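
One generic check used when evaluating predictive uncertainty in regression (a sketch of the general idea, not necessarily this paper's exact protocol) is the empirical coverage of Gaussian prediction intervals:

```python
from statistics import NormalDist
import numpy as np

def empirical_coverage(y_true, mu, sigma, level=0.95):
    """Fraction of targets inside the central `level` Gaussian interval.

    A well-calibrated predictive distribution N(mu, sigma^2) should give
    coverage close to `level`.
    """
    z = NormalDist().inv_cdf(0.5 + level / 2)   # e.g. ~1.96 for 95%
    inside = np.abs(y_true - mu) <= z * sigma
    return inside.mean()

# Sanity check with perfectly calibrated synthetic predictions.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=100000)
cov = empirical_coverage(y, mu=0.0, sigma=1.0)  # should be close to 0.95
```

Coverage well below the nominal level signals overconfidence, which is exactly the failure mode such evaluation frameworks are designed to expose, especially under distribution shift.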
