Optical Flow Estimation
652 papers with code • 10 benchmarks • 33 datasets
Optical Flow Estimation is a computer vision task that involves computing the motion of objects in an image or a video sequence. The goal of optical flow estimation is to determine the movement of pixels or features in the image, which can be used for various applications such as object tracking, motion analysis, and video compression.
Approaches to optical flow estimation include correlation-based, block-matching, feature-tracking, energy-based, and gradient-based methods; recent work is increasingly learning-based.
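To make the gradient-based family concrete, here is a minimal single-point Lucas-Kanade sketch in pure Python (an illustrative toy, not a production implementation): it solves the 2x2 normal equations for the flow vector (u, v) over a small window, assuming brightness constancy.

```python
def lucas_kanade(frame1, frame2, x0, y0, win=2):
    """Estimate the flow (u, v) at pixel (x0, y0) by least squares over a
    (2*win+1)^2 window. frame1/frame2 are lists of lists indexed [y][x]."""
    sxx = sxy = syy = bx = by = 0.0
    for y in range(y0 - win, y0 + win + 1):
        for x in range(x0 - win, x0 + win + 1):
            ix = (frame1[y][x + 1] - frame1[y][x - 1]) / 2.0  # dI/dx (central diff)
            iy = (frame1[y + 1][x] - frame1[y - 1][x]) / 2.0  # dI/dy
            it = frame2[y][x] - frame1[y][x]                   # dI/dt
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            bx -= ix * it;  by -= iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:          # aperture problem: flow not uniquely determined
        return None
    u = (syy * bx - sxy * by) / det   # Cramer's rule on the 2x2 system
    v = (sxx * by - sxy * bx) / det
    return u, v

# Synthetic example: frame2 is frame1 translated one pixel to the right.
f1 = [[x * y for x in range(12)] for y in range(12)]
f2 = [[(x - 1) * y for x in range(12)] for y in range(12)]
print(lucas_kanade(f1, f2, 6, 6))  # → (1.0, 0.0)
```

In practice one would use a pyramidal, dense implementation (e.g. OpenCV's `calcOpticalFlowPyrLK` or `calcOpticalFlowFarneback`) rather than this per-pixel toy.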
Further reading:
Definition source: Devon: Deformable Volume Network for Learning Optical Flow
Libraries
Use these libraries to find Optical Flow Estimation models and implementations.
Latest papers with no code
Chaos in Motion: Unveiling Robustness in Remote Heart Rate Measurement through Brain-Inspired Skin Tracking
We regard remote heart rate measurement as the analysis of the spatiotemporal characteristics of the optical flow signal in the video.
SciFlow: Empowering Lightweight Optical Flow Models with Self-Cleaning Iterations
Optical flow estimation is crucial to a variety of vision tasks.
MemFlow: Optical Flow Estimation and Prediction with Memory
To this end, we present MemFlow, a real-time method for optical flow estimation and prediction with memory.
Salient Sparse Visual Odometry With Pose-Only Supervision
Visual Odometry (VO) is vital for the navigation of autonomous systems, providing accurate position and orientation estimates at reasonable costs.
LoSA: Long-Short-range Adapter for Scaling End-to-End Temporal Action Localization
Temporal Action Localization (TAL) involves localizing and classifying action snippets in an untrimmed video.
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks
Our attack prototype, named BadPart, is evaluated on both monocular depth estimation (MDE) and optical flow estimation (OFE) tasks, utilizing a total of 7 models.
FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation
Existing approaches use additional black-box networks with semantic priors to separate moving objects, and improve the model only at the loss level.
$\mathrm{F^2Depth}$: Self-supervised Indoor Monocular Depth Estimation via Optical Flow Consistency and Feature Map Synthesis
To evaluate the generalization ability of our $\mathrm{F^2Depth}$, we collect a Campus Indoor depth dataset composed of approximately 1500 points selected from 99 images in 18 scenes.
OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation
We propose OCAI, a method that supports robust frame interpolation by jointly generating intermediate video frames and the optical flow between them.
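Flow-based interpolation methods generally synthesize an intermediate frame by warping the inputs along (time-scaled) optical flow. A minimal backward-warping sketch in pure Python with bilinear sampling (illustrative only, not OCAI's actual procedure):

```python
def backward_warp(img, u, v):
    """Backward-warp img by a per-pixel flow field: out[y][x] bilinearly
    samples img at (x + u[y][x], y + v[y][x]), clamped to the border.
    img, u, v are equal-sized lists of lists indexed [y][x]."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = min(max(x + u[y][x], 0.0), w - 1.0)   # clamp sample coords
            sy = min(max(y + v[y][x], 0.0), h - 1.0)
            x0, y0 = int(sx), int(sy)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = sx - x0, sy - y0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy      # bilinear blend
    return out
```

To interpolate a frame at time t between two frames, one would warp each frame by its flow scaled by t and (1 - t) and blend, typically with occlusion-aware weights, which is where occlusion reasoning such as OCAI's comes in.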
AI-Generated Video Detection via Spatio-Temporal Anomaly Learning
The advancement of generation models has led to the emergence of highly realistic artificial intelligence (AI)-generated videos.