no code implementations • 31 May 2023 • Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski
Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments.
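To make the full-reference setting concrete, here is a minimal sketch of a classic FR metric (PSNR) computed between a reference and a distorted image; this illustrates the input/output contract of an FR-IQM, not the learned metric proposed in the paper:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio, a classic full-reference quality metric."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.ones((8, 8))   # toy "reference" image in [0, 1]
dist = ref - 0.1        # uniformly darkened "distorted" copy
print(round(psnr(ref, dist), 2))  # → 20.0
```

Learned FR-IQMs replace the pixelwise error above with features that correlate better with human judgments, but keep the same two-image interface.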
1 code implementation • 4 Feb 2023 • Lingyan Ruan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Bin Chen
In this work, we propose a unified lightweight CNN that features a large effective receptive field (ERF) and achieves comparable or better performance than Transformers at a lower computational cost.
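The paper does not spell out its architecture here, but the idea of growing an effective receptive field cheaply can be illustrated with a standard receptive-field calculation for stacked dilated convolutions (a common ERF-enlarging device; the specific layer configuration below is hypothetical):

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers given (kernel, stride, dilation)."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += d * (k - 1) * jump  # each layer widens the field by its dilated extent
        jump *= s                 # stride compounds the step between input samples
    return rf

# Three 3x3 convs, stride 1, dilations 1/2/4: the field grows far faster
# than the plain (dilation-1) stack, at the same parameter count.
print(receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)]))  # → 15
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)]))  # → 7
```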
Ranked #2 on Image Defocus Deblurring on DPD
no code implementations • 19 Jun 2022 • Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski
Video frame interpolation (VFI) enables many important applications that may involve the temporal domain, such as slow-motion playback, or the spatial domain, such as stop-motion sequences.
no code implementations • 2 Feb 2022 • Mojtaba Bemana, Karol Myszkowski, Jeppe Revall Frisvad, Hans-Peter Seidel, Tobias Ritschel
We tackle the problem of generating novel-view images from collections of 2D images showing refractive and reflective objects.
no code implementations • 22 Dec 2020 • Uğur Çoğalan, Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel
Regrettably, capturing distorted sensor readings is time-consuming; as well, there is a lack of clean HDR videos.
no code implementations • 1 Oct 2020 • Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel
We suggest representing an X-Field (a set of 2D images taken across different view, time, or illumination conditions, i.e., video, light field, reflectance fields, or combinations thereof) by learning a neural network (NN) that maps their view, time, or light coordinates to 2D images.
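The coordinate-to-image mapping described above can be sketched as a tiny untrained MLP; the layer sizes and the 3-D (view, time, light) coordinate below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4  # tiny output resolution, for illustration only

# One hidden layer mapping a 3-D (view, time, light) coordinate to an image.
W1 = rng.normal(size=(3, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.normal(size=(32, H * W * 3)) * 0.1
b2 = np.zeros(H * W * 3)

def render(coord: np.ndarray) -> np.ndarray:
    """Map an X-Field coordinate to an H x W RGB image (forward pass only)."""
    h = np.maximum(0.0, coord @ W1 + b1)   # ReLU hidden layer
    return (h @ W2 + b2).reshape(H, W, 3)  # raw output; training is not shown

img = render(np.array([0.5, 0.2, 0.8]))
print(img.shape)  # → (4, 4, 3)
```

Training such a network on the captured images then allows querying it at unobserved coordinates to interpolate new views.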
no code implementations • 30 Oct 2019 • Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel
We suggest representing light field (LF) videos as "one-off" neural networks (NN), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views.
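Unlike a mapping that emits whole images, the representation here is queried per coordinate. A minimal sketch, assuming a 5-D query (view u, v; pixel x, y; time t) and an arbitrary untrained MLP:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny MLP mapping a 5-D (u, v, x, y, t) coordinate to one RGB value.
W1 = rng.normal(size=(5, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)) * 0.1
b2 = np.zeros(3)

def color_at(coord: np.ndarray) -> np.ndarray:
    """Query the network at one light-field-video coordinate (forward pass only)."""
    h = np.maximum(0.0, coord @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                    # untrained RGB prediction

rgb = color_at(np.array([0.1, 0.9, 0.5, 0.5, 0.3]))
print(rgb.shape)  # → (3,)
```

Once fitted to the sparse input views, dense novel views come from evaluating the network at intermediate coordinates.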