Video Reconstruction

29 papers with code • 7 benchmarks • 6 datasets

Source: Deep-SloMo

Most implemented papers

VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training

MCG-NJU/VideoMAE 23 Mar 2022

Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets.
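The core idea behind VideoMAE is to mask a very high ratio of spacetime patches and reconstruct them, using "tube" masking so that the same spatial patches are hidden in every frame and the model cannot simply copy pixels across time. A minimal NumPy sketch of tube masking (function name and sizes are illustrative, not taken from the repository):

```python
import numpy as np

def tube_mask(num_frames, num_patches, mask_ratio, rng):
    """Tube masking: mask the same spatial patches in every frame,
    so masked content cannot be recovered from neighboring frames."""
    num_masked = int(num_patches * mask_ratio)
    masked = rng.choice(num_patches, size=num_masked, replace=False)
    mask = np.zeros(num_patches, dtype=bool)
    mask[masked] = True
    # Broadcast the per-frame mask over the temporal axis ("tubes").
    return np.broadcast_to(mask, (num_frames, num_patches))

rng = np.random.default_rng(0)
mask = tube_mask(num_frames=8, num_patches=196, mask_ratio=0.9, rng=rng)
visible = ~mask
print(visible.sum(axis=1))  # 20 visible patches in every frame (196 - 176)
```

Only the visible tokens are fed to the encoder; a lightweight decoder reconstructs the masked ones, which is what makes the high masking ratio data-efficient.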

First Order Motion Model for Image Animation

AliaksandrSiarohin/first-order-model NeurIPS 2019

To achieve this, we decouple appearance and motion information using a self-supervised formulation.

Motion Representations for Articulated Animation

snap-research/articulated-animation CVPR 2021

To facilitate animation and prevent the leakage of the shape of the driving object, we disentangle shape and pose of objects in the region space.

Layered Neural Atlases for Consistent Video Editing

ykasten/layered-neural-atlases 23 Sep 2021

We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases, each providing a unified representation of the appearance of an object (or background) over the video.

NeRV: Neural Representations for Videos

haochen-rye/nerv NeurIPS 2021

In contrast, with NeRV, we can use any neural network compression method as a proxy for video compression, and achieve comparable performance to traditional frame-based video compression approaches (H.264, HEVC, etc.).
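NeRV represents a video as a network that maps a frame index t to the full frame, so compressing the video reduces to compressing the network's weights. A toy sketch of that input/output contract (an untrained, random-weight MLP; NeRV's actual decoder uses convolutional upsampling blocks, so this only illustrates the interface):

```python
import numpy as np

def positional_encoding(t, num_freqs=8):
    """Sin/cos features of a normalized frame index t in [0, 1],
    as coordinate networks use for their scalar time input."""
    freqs = 2.0 ** np.arange(num_freqs)
    return np.concatenate([np.sin(np.pi * freqs * t),
                           np.cos(np.pi * freqs * t)])

class TinyFrameNet:
    """Toy frame-index -> frame decoder (hypothetical, untrained)."""
    def __init__(self, h, w, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 16  # 2 * num_freqs
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, h * w * 3))
        self.h, self.w = h, w

    def __call__(self, t):
        x = positional_encoding(t)
        x = np.maximum(x @ self.w1, 0.0)          # ReLU
        frame = 1.0 / (1.0 + np.exp(-(x @ self.w2)))  # sigmoid to [0, 1]
        return frame.reshape(self.h, self.w, 3)

net = TinyFrameNet(h=4, w=4)
frame = net(t=0.25)
print(frame.shape)  # (4, 4, 3)
```

Because the whole video lives in the weights, pruning or quantizing the network compresses the video, which is the proxy the abstract refers to.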

Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time

shangwei5/vidue CVPR 2023

Moreover, on the seemingly implausible 16× interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR.

DeepBinaryMask: Learning a Binary Mask for Video Compressive Sensing

miliadis/DeepVideoCS 12 Jul 2016

In this paper, we propose a novel encoder-decoder neural network model referred to as DeepBinaryMask for video compressive sensing.
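In video compressive sensing, a burst of frames is modulated by per-frame binary masks and summed into a single coded snapshot; DeepBinaryMask's contribution is to learn those masks jointly with the decoder. A sketch of the measurement (encoder) model, with random masks standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16              # 8 frames compressed into 1 snapshot
video = rng.random((T, H, W))    # ground-truth frames in [0, 1]

# Per-frame binary masks; in DeepBinaryMask these are learned jointly
# with the reconstruction network rather than drawn at random as here.
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)

# Coded snapshot: each pixel integrates its masked frames over time,
# giving a T-fold temporal compression that the decoder must invert.
measurement = (masks * video).sum(axis=0)
print(measurement.shape)  # (16, 16)
```

The decoder network then reconstructs all T frames from this single measurement, so mask quality directly bounds reconstruction quality.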

Bringing Alive Blurred Moments

anshulbshah/Blurred-Image-to-Video CVPR 2019

This network extracts embedded motion information from the blurred image to generate a sharp video in conjunction with the trained recurrent video decoder.
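The forward model being inverted here is the standard blur formation model: a motion-blurred image is (approximately) the temporal average of the sharp frames captured during the exposure. A minimal sketch of that forward model:

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random((7, 32, 32))  # 7 latent sharp frames within one exposure

# Blur formation: the camera integrates light over the exposure window,
# so the blurred image is the temporal average of the latent frames.
blurred = sharp.mean(axis=0)
print(blurred.shape)  # (32, 32)
```

Recovering the 7 sharp frames from `blurred` alone is ill-posed (the average destroys temporal ordering), which is why the method pairs a motion-extraction network with a trained recurrent video decoder.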

Exploiting Structure for Fast Kernel Learning

treforevans/gp_grid 9 Aug 2018

We propose two methods for exact Gaussian process (GP) inference and learning on massive image, video, spatial-temporal, or multi-output datasets with missing values (or "gaps") in the observed responses.
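The structure exploited here is that a product kernel evaluated on a Cartesian grid of inputs factorizes as a Kronecker product of small one-dimensional kernel matrices, so eigendecompositions cost roughly O(n³ + m³) instead of O((nm)³). A NumPy check of that identity (grid sizes illustrative):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between 1-D input vectors."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Inputs on a Cartesian grid (e.g. pixel rows x columns of a frame).
xs = np.linspace(0, 1, 5)
ys = np.linspace(0, 1, 4)
grid = np.array([[x, y] for x in xs for y in ys])  # 20 grid points

# Full 20x20 product kernel vs. Kronecker product of 5x5 and 4x4 factors.
K_full = rbf(grid[:, 0], grid[:, 0]) * rbf(grid[:, 1], grid[:, 1])
K_kron = np.kron(rbf(xs, xs), rbf(ys, ys))
print(np.allclose(K_full, K_kron))  # True
```

Missing observations ("gaps") break the exact grid structure, which is the complication the paper's two methods address.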

High Frame Rate Video Reconstruction based on an Event Camera

panpanfei/Bringing-a-Blurry-Frame-Alive-at-High-Frame-Rate-with-an-Event-Camera 12 Mar 2019

Based on the abundant event data alongside low-frame-rate, easily blurred images, we propose a simple yet effective approach to reconstruct high-quality, high-frame-rate sharp videos.