Optical Flow Estimation
655 papers with code • 10 benchmarks • 34 datasets
Optical Flow Estimation is a computer vision task that involves computing the apparent motion of pixels between consecutive frames of a video sequence. The goal is to determine a per-pixel displacement field, which can be used in applications such as object tracking, motion analysis, and video compression.
Classical approaches to optical flow estimation include correlation-based, block-matching, feature-tracking, energy-based, and gradient-based methods; more recent work is dominated by deep learning, as in the papers listed below.
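As a concrete illustration of the gradient-based family, here is a minimal single-point Lucas-Kanade sketch in plain NumPy. The function name, window size, and test point are illustrative choices, not part of any specific paper; a practical implementation would add corner selection, pyramids, and iteration.

```python
import numpy as np

def lucas_kanade(frame1, frame2, y, x, win=7):
    """Estimate the flow (dy, dx) at pixel (y, x) from frame1 to frame2
    via the gradient-based Lucas-Kanade least-squares solve.
    frame1/frame2: 2-D float arrays; win: odd window size (illustrative)."""
    Iy, Ix = np.gradient(frame1)       # spatial image gradients
    It = frame2 - frame1               # temporal gradient
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    # Brightness constancy in the window: Ix*dx + Iy*dy = -It
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dy, dx

# Usage: a Gaussian blob shifted one pixel to the right
yy, xx = np.mgrid[0:64, 0:64].astype(float)
f1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
f2 = np.roll(f1, 1, axis=1)            # motion of +1 pixel in x
dy, dx = lucas_kanade(f1, f2, 28, 28, win=9)
```

For this synthetic pair the recovered flow is close to (dy, dx) = (0, 1); real imagery would require the refinements noted above.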
Definition source: Devon: Deformable Volume Network for Learning Optical Flow
Latest papers with no code
$\mathrm{F^2Depth}$: Self-supervised Indoor Monocular Depth Estimation via Optical Flow Consistency and Feature Map Synthesis
To evaluate the generalization ability of our $\mathrm{F^2Depth}$, we collect a Campus Indoor depth dataset composed of approximately 1500 points selected from 99 images in 18 scenes.
OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
AI-Generated Video Detection via Spatio-Temporal Anomaly Learning
The advancement of generation models has led to the emergence of highly realistic artificial intelligence (AI)-generated videos.
Emotion Recognition from the perspective of Activity Recognition
In this paper, we treat emotion recognition from the perspective of action recognition by exploring the application of deep learning architectures specifically designed for action recognition, for continuous affect recognition.
DS-NeRV: Implicit Neural Video Representation with Decomposed Static and Dynamic Codes
Implicit neural representations for video (NeRV) have recently become a novel way for high-quality video representation.
CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
In minimally invasive endovascular procedures, contrast-enhanced angiography remains the most robust imaging technique.
S2DM: Sector-Shaped Diffusion Models for Video Generation
For text-to-video generation tasks where temporal conditions are not explicitly given, we propose a two-stage generation strategy which can decouple the generation of temporal features from semantic-content features.
GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation
While the optimization can draw photometric reference from the input videos or be regulated by generative models, directly supervising Gaussian motions remains underexplored.
TAPTR: Tracking Any Point with Transformers as Detection
Based on the observation that point tracking bears a great resemblance to object detection and tracking, we borrow designs from DETR-like algorithms to address the task of TAP.
GenFlow: Generalizable Recurrent Flow for 6D Pose Refinement of Novel Objects
Despite the progress of learning-based methods for 6D object pose estimation, the trade-off between accuracy and scalability for novel objects still exists.