Optical Flow Estimation

655 papers with code • 10 benchmarks • 34 datasets

Optical flow estimation is the computer vision task of computing the apparent motion of pixels between consecutive frames of a video sequence. The resulting flow field describes how each pixel (or feature) moves from one frame to the next and is used in applications such as object tracking, motion analysis, and video compression.

Classical approaches to optical flow estimation include gradient-based, correlation-based, block-matching, feature-tracking, and energy-based methods; more recent work is dominated by end-to-end learning-based models (e.g., FlowNet, RAFT).
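
As a minimal sketch of a classical dense estimator, the snippet below computes flow between two frames with OpenCV's Farneback method and renders the usual hue/magnitude visualization; the frame file names are placeholders, not part of any benchmark.

```python
# Dense optical flow sketch using OpenCV's Farneback method.
# "frame0.png" / "frame1.png" are placeholder file names for two consecutive frames.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# flow[y, x] = (dx, dy): per-pixel displacement from prev to curr
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

# Common visualization: direction as hue, magnitude as brightness
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((*prev.shape, 3), dtype=np.uint8)
hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)   # hue: flow direction
hsv[..., 1] = 255                                         # saturation: full
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("flow_vis.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
```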

Definition source: Devon: Deformable Volume Network for Learning Optical Flow

Libraries

Use these libraries to find Optical Flow Estimation models and implementations

Latest papers with no code

$\mathrm{F^2Depth}$: Self-supervised Indoor Monocular Depth Estimation via Optical Flow Consistency and Feature Map Synthesis

no code yet • 27 Mar 2024

To evaluate the generalization ability of our $\mathrm{F^2Depth}$, we collect a Campus Indoor depth dataset composed of approximately 1500 points selected from 99 images in 18 scenes.

OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation

no code yet • 26 Mar 2024

We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
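
As general background (a simplified textbook illustration, not the OCAI method itself), flow-based frame interpolation often approximates an intermediate frame by scaling the estimated flow to the target time step and backward-warping a source frame; the sketch below assumes `flow` maps frame0 to frame1 and ignores occlusion handling.

```python
# Generic flow-based intermediate-frame warping (illustration only, not OCAI).
# Approximates the flow field at time t by -t * flow, which ignores occlusions.
import cv2
import numpy as np

def warp_to_time(frame0, flow, t=0.5):
    """Backward-warp frame0 to an intermediate time t in [0, 1]."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sampling coordinates: each output pixel looks back along the scaled flow
    map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame0, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```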

AI-Generated Video Detection via Spatio-Temporal Anomaly Learning

no code yet • 25 Mar 2024

The advancement of generation models has led to the emergence of highly realistic artificial intelligence (AI)-generated videos.

Emotion Recognition from the perspective of Activity Recognition

no code yet • 24 Mar 2024

In this paper, we treat emotion recognition from the perspective of action recognition by exploring the application of deep learning architectures specifically designed for action recognition, for continuous affect recognition.

DS-NeRV: Implicit Neural Video Representation with Decomposed Static and Dynamic Codes

no code yet • 23 Mar 2024

Implicit neural representations for video (NeRV) have recently become a novel way for high-quality video representation.

CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers

no code yet • 21 Mar 2024

In minimally invasive endovascular procedures, contrast-enhanced angiography remains the most robust imaging technique.

S2DM: Sector-Shaped Diffusion Models for Video Generation

no code yet • 20 Mar 2024

For text-to-video generation tasks where temporal conditions are not explicitly given, we propose a two-stage generation strategy which can decouple the generation of temporal features from semantic-content features.

GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation

no code yet • 19 Mar 2024

While the optimization can draw photometric reference from the input videos or be regulated by generative models, directly supervising Gaussian motions remains underexplored.

TAPTR: Tracking Any Point with Transformers as Detection

no code yet • 19 Mar 2024

Based on the observation that point tracking bears a great resemblance to object detection and tracking, we borrow designs from DETR-like algorithms to address the task of TAP.

GenFlow: Generalizable Recurrent Flow for 6D Pose Refinement of Novel Objects

no code yet • 18 Mar 2024

Despite the progress of learning-based methods for 6D object pose estimation, the trade-off between accuracy and scalability for novel objects still exists.