Search Results for author: Mojtaba Bemana

Found 7 papers, 1 paper with code

Enhancing image quality prediction with self-supervised visual masking

no code implementations · 31 May 2023 · Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski

Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments.

SSIM
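The SSIM tag above refers to the structural similarity index, a classic full-reference metric of the kind this paper builds on. As a rough illustration (not the authors' method), here is a minimal NumPy sketch of the simplified SSIM formula computed from global image statistics; standard SSIM averages the same expression over local sliding windows:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM using global statistics over the whole image.
    (Standard SSIM averages this expression over local windows.)"""
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Toy reference/distorted pair: identical images score 1.0,
# a noisy copy scores strictly less.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
noisy = np.clip(ref + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)
```

For the windowed variant used in practice, libraries such as scikit-image provide a ready-made `structural_similarity` function.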

Revisiting Image Deblurring with an Efficient ConvNet

1 code implementation · 4 Feb 2023 · Lingyan Ruan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Bin Chen

In this work, we propose a unified lightweight CNN that features a large effective receptive field (ERF) and achieves comparable or even better performance than Transformers at a lower computational cost.

Attribute Deblurring +2
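The entry above hinges on the effective receptive field. A quick sketch of the standard formula for the *theoretical* receptive field of stacked convolutions (a textbook calculation, not this paper's architecture) shows why large kernels matter for deblurring:

```python
def receptive_field(kernel_sizes, strides=None):
    """Theoretical receptive field of a stack of conv layers:
    rf grows by (k - 1) * jump per layer; jump multiplies by stride."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf

# One 31x31 kernel vs. ten stacked 3x3 kernels (all stride 1):
large = receptive_field([31])        # rf = 31 from a single layer
stacked = receptive_field([3] * 10)  # rf = 21 despite ten layers
```

Matching a single 31-pixel-wide kernel with 3x3 layers would take fifteen of them, which is one intuition behind large-kernel ConvNet designs.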

Video frame interpolation for high dynamic range sequences captured with dual-exposure sensors

no code implementations · 19 Jun 2022 · Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski

Video frame interpolation (VFI) enables many important applications that might involve the temporal domain, such as slow motion playback, or the spatial domain, such as stop motion sequences.

Video Frame Interpolation

Eikonal Fields for Refractive Novel-View Synthesis

no code implementations · 2 Feb 2022 · Mojtaba Bemana, Karol Myszkowski, Jeppe Revall Frisvad, Hans-Peter Seidel, Tobias Ritschel

We tackle the problem of generating novel-view images from collections of 2D images showing refractive and reflective objects.

Novel View Synthesis

X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation

no code implementations · 1 Oct 2020 · Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel

We suggest representing an X-Field (a set of 2D images taken across different view, time, or illumination conditions, i.e., video, light field, reflectance fields, or combinations thereof) by learning a neural network (NN) to map their view, time, or light coordinates to 2D images.
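The core idea, a network that maps a capture coordinate to a whole 2D image, can be sketched in a few lines. This is a toy untrained forward pass with random placeholder weights and made-up dimensions, not the paper's architecture; in X-Fields the weights would be trained so that each captured coordinate reproduces its captured image:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 8, 8  # toy image resolution (illustrative only)

# Toy MLP: capture coordinate (view, time, light) -> flattened RGB image.
W1 = rng.standard_normal((3, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, H * W * 3)) * 0.1
b2 = np.zeros(H * W * 3)

def render(coord):
    h = np.tanh(coord @ W1 + b1)              # hidden features
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # pixel values in [0, 1]
    return out.reshape(H, W, 3)

# Once trained, querying an unseen in-between coordinate would
# interpolate across view, time, and illumination.
img = render(np.array([0.5, 0.25, 0.75]))
```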

Neural View-Interpolation for Sparse Light Field Video

no code implementations · 30 Oct 2019 · Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel

We suggest representing light field (LF) videos as "one-off" neural networks (NNs), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views.
