Video Reconstruction

16 papers with code • 6 benchmarks • 5 datasets

Most implemented papers

First Order Motion Model for Image Animation

AliaksandrSiarohin/first-order-model NeurIPS 2019

To animate an object in a source image according to the motion of a driving video, we decouple appearance and motion information using a self-supervised formulation.
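
Below is a minimal, self-contained sketch of that decoupling, not the authors' architecture: a toy soft-argmax keypoint detector supplies the motion signal (keypoint displacements between two frames sampled from the same video), while the source frame supplies appearance. All module and variable names here are illustrative.

```python
# Illustrative sketch of self-supervised appearance/motion decoupling.
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    """Predicts K keypoints as the soft-argmax of predicted heatmaps."""
    def __init__(self, k=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, k, 3, padding=1))

    def forward(self, img):
        heat = self.net(img)                               # (B, K, H, W)
        b, k, h, w = heat.shape
        heat = heat.view(b, k, -1).softmax(dim=-1).view(b, k, h, w)
        ys = torch.linspace(-1, 1, h, device=img.device)
        xs = torch.linspace(-1, 1, w, device=img.device)
        ky = (heat.sum(dim=3) * ys).sum(dim=2)             # (B, K)
        kx = (heat.sum(dim=2) * xs).sum(dim=2)             # (B, K)
        return torch.stack([kx, ky], dim=-1)               # (B, K, 2)

# Self-supervised setup: source and driving frames come from the SAME video,
# so reconstructing the driving frame forces the generator to move the source
# appearance using only the keypoint motion below.
detector = KeypointDetector()
source = torch.rand(2, 3, 64, 64)
driving = torch.rand(2, 3, 64, 64)
motion = detector(driving) - detector(source)               # per-keypoint displacement
print(motion.shape)                                         # torch.Size([2, 10, 2])
```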

Motion Representations for Articulated Animation

snap-research/articulated-animation CVPR 2021

To facilitate animation and prevent leakage of the driving object's shape, we disentangle the shape and pose of objects in the region space.
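
A rough illustration of the region-space idea, under the assumption that regions are represented as soft heatmaps (the function below is a sketch, not the released code): each region is summarized by its mean position and the principal axes of its spatial covariance, which separates where a part is (pose) from how it is spread out (shape).

```python
# Illustrative sketch: summarize soft region heatmaps by mean + principal axes.
import torch

def region_params(heat):
    """heat: (B, K, H, W) soft region heatmaps that sum to 1 over H*W."""
    b, k, h, w = heat.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).view(1, 1, h * w, 2)
    p = heat.view(b, k, h * w, 1)
    mean = (p * grid).sum(dim=2)                              # (B, K, 2) region position
    diff = grid - mean.unsqueeze(2)
    cov = (p.unsqueeze(-1) * diff.unsqueeze(-1) @ diff.unsqueeze(-2)).sum(dim=2)
    u, s, _ = torch.linalg.svd(cov)                           # principal axes of each region
    return mean, u * s.sqrt().unsqueeze(-2)                   # axes scaled by their spread

heat = torch.rand(1, 5, 64, 64)
heat = heat.flatten(2).softmax(-1).view(1, 5, 64, 64)          # normalize to sum to 1
mu, axes = region_params(heat)
print(mu.shape, axes.shape)                                    # (1, 5, 2) (1, 5, 2, 2)
```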

Layered Neural Atlases for Consistent Video Editing

ykasten/layered-neural-atlases 23 Sep 2021

We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases, each providing a unified representation of the appearance of an object (or background) over the video.
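
The decomposition can be sketched with a few coordinate-based MLPs, as below; this is an illustrative skeleton rather than the released implementation, and the layer count, network sizes, and losses are assumptions.

```python
# Illustrative sketch: each video pixel (x, y, t) is mapped by a small MLP to a
# 2D point in a per-layer atlas; another MLP stores the atlas appearance; an
# alpha MLP composites the foreground and background layers.
import torch
import torch.nn as nn

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, o))

to_fg_uv, to_bg_uv = mlp(3, 2), mlp(3, 2)    # (x, y, t) -> atlas coordinates
fg_atlas, bg_atlas = mlp(2, 3), mlp(2, 3)    # atlas coordinates -> RGB
alpha = mlp(3, 1)                            # (x, y, t) -> layer opacity

xyt = torch.rand(1024, 3) * 2 - 1            # sampled video coordinates in [-1, 1]
a = torch.sigmoid(alpha(xyt))
rgb = a * torch.sigmoid(fg_atlas(to_fg_uv(xyt))) + \
      (1 - a) * torch.sigmoid(bg_atlas(to_bg_uv(xyt)))
# Training would compare `rgb` to the video colors at `xyt` (plus the paper's
# consistency losses); editing the atlas appearance then edits every frame
# consistently.
```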

DeepBinaryMask: Learning a Binary Mask for Video Compressive Sensing

miliadis/DeepVideoCS 12 Jul 2016

In this paper, we propose a novel encoder-decoder neural network model referred to as DeepBinaryMask for video compressive sensing.
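
A toy version of the encoder-decoder idea, assuming a straight-through estimator to keep the learned sensing mask binary; the paper's exact architecture and training procedure are not reproduced here.

```python
# Illustrative sketch: the "encoder" is a learnable binary mask that collapses
# T video frames into a single compressive measurement; a decoder network then
# reconstructs the frames. Straight-through sign keeps the mask binary.
import torch
import torch.nn as nn

class BinaryMaskEncoder(nn.Module):
    def __init__(self, t, h, w):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(t, h, w))

    def forward(self, video):                               # video: (B, T, H, W)
        mask = (self.weight > 0).float()
        mask = mask + self.weight - self.weight.detach()    # straight-through gradient
        return (mask * video).sum(dim=1)                    # (B, H, W) single measurement

t, h, w = 16, 8, 8
encoder = BinaryMaskEncoder(t, h, w)
decoder = nn.Sequential(nn.Flatten(), nn.Linear(h * w, 512), nn.ReLU(),
                        nn.Linear(512, t * h * w))
video = torch.rand(4, t, h, w)
recon = decoder(encoder(video)).view(4, t, h, w)
loss = ((recon - video) ** 2).mean()                        # trained end to end
loss.backward()
```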

Bringing Alive Blurred Moments

anshulbshah/Blurred-Image-to-Video CVPR 2019

The encoder network extracts embedded motion information from the blurred image and, in conjunction with the trained recurrent video decoder, generates a sharp video.
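
A minimal sketch of that encode-then-unroll pattern, with made-up layer sizes and frame counts; it is not the paper's network, only an illustration of how a single blurred image can be expanded into a frame sequence by a recurrent decoder.

```python
# Illustrative sketch: a CNN encodes the blurred image into a motion/appearance
# code, and a recurrent decoder unrolls it into T sharp frames.
import torch
import torch.nn as nn

class BlurToVideo(nn.Module):
    def __init__(self, t=7, dim=128, size=32):
        super().__init__()
        self.t, self.size = t, size
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.gru = nn.GRUCell(dim, dim)
        self.frame = nn.Linear(dim, 3 * size * size)

    def forward(self, blurred):                              # (B, 3, H, W)
        code = self.encoder(blurred)
        h, frames = torch.zeros_like(code), []
        for _ in range(self.t):                              # one recurrent step per output frame
            h = self.gru(code, h)
            frames.append(self.frame(h).view(-1, 3, self.size, self.size))
        return torch.stack(frames, dim=1)                    # (B, T, 3, size, size)

video = BlurToVideo()(torch.rand(2, 3, 64, 64))
print(video.shape)                                           # torch.Size([2, 7, 3, 32, 32])
```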

Exploiting Structure for Fast Kernel Learning

treforevans/gp_grid 9 Aug 2018

We propose two methods for exact Gaussian process (GP) inference and learning on massive image, video, spatial-temporal, or multi-output datasets with missing values (or "gaps") in the observed responses.
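
The key computational trick can be illustrated in a few lines of NumPy: on a grid, the kernel matrix factors as a Kronecker product of small per-dimension kernels, so matrix-vector products (the workhorse of iterative GP solvers) never require forming the full kernel, and gaps are handled by masking. The sketch below is illustrative, not the authors' implementation.

```python
# Illustrative sketch: Kronecker-structured kernel matvec on a grid with gaps.
import numpy as np

def rbf(x, ls=1.0):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

nx, ny = 30, 40
Kx, Ky = rbf(np.linspace(0, 1, nx)), rbf(np.linspace(0, 1, ny))
mask = np.random.rand(nx, ny) > 0.2          # observed entries of the grid

def kron_matvec(V):
    """(Kx kron Ky) vec(V), computed as Kx @ V @ Ky.T without the 1200x1200 matrix."""
    return Kx @ V @ Ky.T

# An iterative (conjugate-gradient-style) solve would repeatedly call kron_matvec
# on grid-shaped vectors, zeroing the unobserved ("gap") entries at each step.
v = np.where(mask, np.random.randn(nx, ny), 0.0)
print(kron_matvec(v).shape)                   # (30, 40)
```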

High Frame Rate Video Reconstruction based on an Event Camera

panpanfei/Bringing-a-Blurry-Frame-Alive-at-High-Frame-Rate-with-an-Event-Camera 12 Mar 2019

Based on the abundant event data alongside low-frame-rate, easily blurred images, we propose a simple yet effective approach to reconstruct high-quality, high-frame-rate sharp videos.
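
The core relation between the blurry frame, the events, and the latent sharp frames can be demonstrated with a small noise-free simulation; the contrast threshold and event statistics below are made up for illustration.

```python
# Illustrative sketch of the event-based double integral idea: a blurry frame is
# the temporal average of latent sharp frames, and events record log-intensity
# changes, so a sharp latent frame can be recovered by dividing the blur by the
# average exponentiated event integral.
import numpy as np

T, H, W = 50, 16, 16
c = 0.2                                          # event contrast threshold (assumed)
events = np.random.choice([-1.0, 0.0, 1.0], size=(T, H, W), p=[0.1, 0.8, 0.1])

E = np.cumsum(events, axis=0) * c                # integrated log-intensity change e(t)
latent0 = np.random.rand(H, W) + 0.1             # (unknown) sharp reference frame
latent = latent0 * np.exp(E)                     # L(t) = L(0) * exp(c * e(t))
blurry = latent.mean(axis=0)                     # camera output: average over the exposure

# Reconstruction: L(0) = B / mean_t exp(c * e(t)); rolling the events forward
# then yields a sharp frame at any time inside the exposure.
recovered0 = blurry / np.exp(E).mean(axis=0)
print(np.abs(recovered0 - latent0).max())        # ~1e-16: exact in this noise-free toy
```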

Deep Slow Motion Video Reconstruction with Hybrid Imaging System

avinashpaliwal/Deep-SloMo 27 Feb 2020

In this paper, we address this problem using two video streams as input: an auxiliary video with high frame rate and low spatial resolution, which provides temporal information, in addition to the standard main video with low frame rate and high spatial resolution.
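
A toy sketch of how such a hybrid input pair might be fed to a network: the auxiliary low-resolution frame is upsampled to the main stream's resolution and concatenated with the surrounding high-resolution key frames. The fusion network below is a placeholder, not the paper's architecture.

```python
# Illustrative sketch of fusing the two streams for one intermediate time step.
import torch
import torch.nn as nn
import torch.nn.functional as F

fusion = nn.Sequential(                          # toy stand-in for the reconstruction network
    nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))

aux_lowres = torch.rand(1, 3, 90, 160)           # high frame rate, low resolution
key_prev = torch.rand(1, 3, 360, 640)            # low frame rate, high resolution
key_next = torch.rand(1, 3, 360, 640)

aux_up = F.interpolate(aux_lowres, size=key_prev.shape[-2:],
                       mode="bicubic", align_corners=False)
pred = fusion(torch.cat([aux_up, key_prev, key_next], dim=1))
print(pred.shape)                                # torch.Size([1, 3, 360, 640])
```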

Reducing the Sim-to-Real Gap for Event Cameras

TimoStoff/event_cnn_minimal ECCV 2020

We present strategies for improving training data for event-based CNNs that yield a 20-40% boost in the performance of existing state-of-the-art (SOTA) video reconstruction networks retrained with our method, and up to 15% for optic flow networks.
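
As one example of a data-side strategy (the exact augmentations used in the paper are not reproduced here), simulated event voxel grids can be perturbed with noise events and hot pixels so that networks trained on them transfer better to real sensors; the function and parameters below are illustrative.

```python
# Illustrative sketch: make simulated event voxel grids look more sensor-like.
import torch

def augment_event_voxels(voxels, noise_rate=0.05, num_hot_pixels=4):
    """voxels: (B, T, H, W) event voxel grid from a simulator."""
    b, t, h, w = voxels.shape
    noise = (torch.rand_like(voxels) < noise_rate).float() * torch.randn_like(voxels)
    out = voxels + noise
    for _ in range(num_hot_pixels):              # pixels that fire almost constantly
        y, x = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        out[:, :, y, x] += torch.randn(b, t).abs()
    return out

sim = torch.randn(2, 5, 64, 64) * (torch.rand(2, 5, 64, 64) < 0.1).float()
real_like = augment_event_voxels(sim)
```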

Video Reconstruction by Spatio-Temporal Fusion of Blurred-Coded Image Pair

asprasan/codedblurred 20 Oct 2020

The input to our algorithm is a pair of images: a fully-exposed (blurred) image and a coded-exposure image.
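
A short sketch of how such an input pair can be simulated from a sharp video for training or experimentation; the code pattern, frame count, and normalization below are assumptions, not the paper's exact sensing model.

```python
# Illustrative sketch: the coded image modulates each frame with a per-pixel
# binary exposure code before summing, while the fully-exposed image is a plain
# temporal average (a conventional motion-blurred frame).
import torch

T, H, W = 9, 32, 32
video = torch.rand(T, 3, H, W)                          # latent sharp frames
code = (torch.rand(T, 1, H, W) > 0.5).float()           # binary exposure code

coded_image = (code * video).sum(dim=0) / code.sum(dim=0).clamp(min=1)
blurred_image = video.mean(dim=0)                       # fully exposed counterpart
pair = torch.cat([coded_image, blurred_image], dim=0)   # network input: 6 channels
print(pair.shape)                                       # torch.Size([6, 32, 32])
```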