Video Reconstruction
16 papers with code • 6 benchmarks • 5 datasets
Source: Deep-SloMo
Most implemented papers
First Order Motion Model for Image Animation
To achieve this, we decouple appearance and motion information using a self-supervised formulation.
Motion Representations for Articulated Animation
To facilitate animation and prevent the leakage of the shape of the driving object, we disentangle shape and pose of objects in the region space.
Layered Neural Atlases for Consistent Video Editing
We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases, each providing a unified representation of the appearance of an object (or background) over the video.
DeepBinaryMask: Learning a Binary Mask for Video Compressive Sensing
In this paper, we propose a novel encoder-decoder neural network model referred to as DeepBinaryMask for video compressive sensing.
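The sensing model behind video compressive sensing can be sketched in a few lines: a per-frame binary mask modulates each frame, and the sensor records a single summed snapshot. The sketch below is a toy numpy simulation under assumed shapes; in DeepBinaryMask the mask is learned by the encoder and the inversion is done by the decoder network, neither of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W = 8, 16, 16            # frames per coded exposure, spatial size (illustrative)
video = rng.random((T, H, W))  # latent sharp frames the sensor would see

# One binary mask per frame; the paper learns these, here we draw them at random.
masks = rng.integers(0, 2, size=(T, H, W)).astype(np.float64)

# A single coded snapshot: the sensor sums the masked frames over the exposure.
measurement = (masks * video).sum(axis=0)   # shape (H, W)

# The decoder's job is to invert this T-to-1 compression; a trivial baseline
# just divides by the per-pixel count of open mask entries.
counts = np.maximum(masks.sum(axis=0), 1.0)
baseline = measurement / counts             # rough per-pixel average of unmasked frames
```

The decoder network replaces the naive division above with a learned mapping from the single measurement back to all T frames.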
Bringing Alive Blurred Moments
This network extracts the motion information embedded in the blurred image and, in conjunction with a trained recurrent video decoder, generates a sharp video.
Exploiting Structure for Fast Kernel Learning
We propose two methods for exact Gaussian process (GP) inference and learning on massive image, video, spatial-temporal, or multi-output datasets with missing values (or "gaps") in the observed responses.
High Frame Rate Video Reconstruction based on an Event Camera
Based on the abundant event data alongside low-frame-rate, easily blurred images, we propose a simple yet effective approach to reconstruct high-quality, high-frame-rate sharp videos.
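The basic event-integration idea underlying event-camera reconstruction can be shown in a minimal sketch: each event contributes a signed log-intensity step of a fixed contrast threshold at its pixel. This is a toy illustration only; the contrast value and event list are assumptions, and the paper's method additionally exploits the blurred low-frame-rate images.

```python
import numpy as np

H, W = 4, 4
log_I = np.zeros((H, W))       # running log-intensity estimate
C = 0.2                        # contrast threshold (illustrative value)

# Each event is (x, y, polarity); polarity +1/-1 marks a brightness step of +/-C.
events = [(0, 0, +1), (0, 0, +1), (1, 2, -1), (3, 3, +1)]

for x, y, p in events:
    log_I[y, x] += p * C       # integrate signed contrast steps at the event pixel

intensity = np.exp(log_I)      # back to linear intensity
```

Integrating events between two frame timestamps in this way yields the brightness change that a high-frame-rate reconstruction has to account for.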
Deep Slow Motion Video Reconstruction with Hybrid Imaging System
In this paper, we address this problem using two video streams as input: an auxiliary video with a high frame rate and low spatial resolution, which provides temporal information, in addition to the standard main video with a low frame rate and high spatial resolution.
Reducing the Sim-to-Real Gap for Event Cameras
We present strategies for improving training data for event-based CNNs that yield a 20-40% performance boost for existing state-of-the-art (SOTA) video reconstruction networks retrained with our method, and up to 15% for optic flow networks.
Video Reconstruction by Spatio-Temporal Fusion of Blurred-Coded Image Pair
The input to our algorithm is a fully-exposed and coded image pair.
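How such an input pair relates to the latent frames can be sketched with a toy numpy forward model (shapes and the random code are assumptions, not the paper's setup): the fully-exposed image is the plain temporal average of the frames within the exposure, i.e. the motion-blurred frame, while the coded image is a per-pixel shutter-modulated sum.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 9, 8, 8
frames = rng.random((T, H, W))          # latent sharp frames within one exposure

# Fully-exposed image: plain temporal average, i.e. the motion-blurred frame.
blurred = frames.mean(axis=0)

# Coded image: a per-pixel binary shutter selects which frames contribute.
code = rng.integers(0, 2, size=(T, H, W)).astype(np.float64)
coded = (code * frames).sum(axis=0) / T

# The fusion algorithm receives this pair and predicts all T latent frames.
pair = np.stack([blurred, coded])       # shape (2, H, W)
```

The blurred frame preserves full light throughput while the coded frame preserves temporal detail, which is why fusing the pair is better conditioned than inverting either measurement alone.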