166 papers with code • 0 benchmarks • 9 datasets
Motion Estimation is used to determine the block-wise or pixel-wise motion vectors between two frames.
Source: MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement
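The block-wise case can be sketched as a classic exhaustive-search block matcher: for each block in the current frame, search a small window in the previous frame for the offset with the lowest matching cost. This is a minimal illustration, not the method of any paper listed below; the function name, block size, and search radius are our own choices.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Estimate block-wise motion vectors by exhaustive search.

    For each `block`x`block` patch of `curr`, find the offset within
    +/-`search` pixels in `prev` that minimises the sum of absolute
    differences (SAD). Returns an (H/block, W/block, 2) array of
    (dy, dx) motion vectors.
    """
    h, w = curr.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            patch = curr[y0:y0 + block, x0:x0 + block].astype(int)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue  # candidate block falls outside the frame
                    cand = prev[y1:y1 + block, x1:x1 + block].astype(int)
                    sad = np.abs(cand - patch).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```

Pixel-wise (optical-flow) estimation generalises this idea to a dense displacement per pixel, usually with smoothness priors or, in the papers below, learned networks.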
These leaderboards are used to track progress in Motion Estimation
Libraries
Use these libraries to find Motion Estimation models and implementations.
Most implemented papers
Digging Into Self-Supervised Monocular Depth Estimation
Per-pixel ground-truth depth data is challenging to acquire at scale.
Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos
Models and examples built with TensorFlow
On human motion prediction using recurrent neural networks
Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality.
The Double Sphere Camera Model
We evaluate the model on a calibration dataset with several different lenses and compare camera models using metrics relevant for Visual Odometry, i.e., reprojection error, as well as the computation time of the projection and unprojection functions and their Jacobians.
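Following the paper's notation, the Double Sphere projection maps a 3D point through two offset unit spheres (parameter ξ) and a shifted pinhole (parameter α). The sketch below assumes that formulation; the helper name and the sample intrinsics in the test are illustrative, not calibration values from the paper.

```python
import numpy as np

def ds_project(p, fx, fy, cx, cy, xi, alpha):
    """Project a 3D point with the Double Sphere camera model.

    d1 is the distance to the first unit sphere's centre; the point is
    re-expressed relative to a second sphere shifted by `xi` along the
    optical axis, then projected through a pinhole whose centre is
    shifted by `alpha` between the sphere centre and its surface.
    """
    x, y, z = p
    d1 = np.sqrt(x * x + y * y + z * z)
    zs = xi * d1 + z                       # z shifted by the first sphere
    d2 = np.sqrt(x * x + y * y + zs * zs)
    denom = alpha * d2 + (1.0 - alpha) * zs
    return np.array([fx * x / denom + cx, fy * y / denom + cy])
```

Note that with ξ = 0 and α = 0 the denominator reduces to z, recovering the standard pinhole projection, which is one reason the model is cheap to evaluate and differentiate.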
Visual-Inertial Mapping with Non-Linear Factor Recovery
We reconstruct a set of non-linear factors that optimally approximate the information about the trajectory accumulated by VIO.
FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation
In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture.
DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks
This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs).
Video Enhancement with Task-Oriented Flow
Many video enhancement algorithms rely on optical flow to register frames in a video sequence.
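"Registering" frames with optical flow usually means backward-warping one frame towards a reference using the estimated flow field, so that corresponding pixels align. A minimal NumPy sketch of that warping step, with bilinear sampling and border clipping, is shown below; it is a generic illustration, not the task-oriented flow of the paper.

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Backward-warp `frame` towards a reference frame.

    `flow[y, x]` holds the (dx, dy) displacement from the reference to
    `frame`; each output pixel is sampled bilinearly at the displaced
    location, with coordinates clipped to the image border.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)   # source x coordinates
    sy = np.clip(ys + flow[..., 1], 0, h - 1)   # source y coordinates
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Enhancement networks then fuse the warped neighbours with the reference frame; errors in the flow show up as misaligned pixels, which is the failure mode task-oriented flow is designed to mitigate.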
Castle in the Sky: Dynamic Sky Replacement and Harmonization in Videos
This paper proposes a vision-based method for video sky replacement and harmonization, which can automatically generate realistic and dramatic sky backgrounds in videos with controllable styles.
HP-GAN: Probabilistic 3D human motion prediction via GAN
Our model, which we call HP-GAN, learns a probability density function of future human poses conditioned on previous poses.