Optical Flow Estimation
658 papers with code • 10 benchmarks • 34 datasets
Optical Flow Estimation is a computer vision task that computes the apparent motion of objects between consecutive frames of a video sequence. The goal is to determine the displacement of pixels or features from one frame to the next, which supports applications such as object tracking, motion analysis, and video compression.
Classical approaches for optical flow estimation include gradient-based (differential), correlation-based, block-matching, feature-tracking, and energy-based methods; more recently, deep learning-based methods have become the dominant approach.
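As a quick illustration of a classical gradient-based method, the sketch below computes dense Farneback flow with OpenCV; the frame paths are placeholders.

```python
import cv2

# Load two consecutive frames (paths are placeholders).
prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Dense gradient-based flow (Farneback): one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

# Convert to magnitude/angle, e.g. for the standard HSV flow visualization.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
```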
Further reading:
Definition source: Devon: Deformable Volume Network for Learning Optical Flow
Libraries
Use these libraries to find Optical Flow Estimation models and implementations.
Latest papers
SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception
In response to this gap, we present SEVD, a first-of-its-kind multi-view synthetic event-based dataset for ego and fixed traffic perception, built using multiple dynamic vision sensors within the CARLA simulator.
DBA-Fusion: Tightly Integrating Deep Dense Visual Bundle Adjustment with Multiple Sensors for Large-Scale Localization and Mapping
Visual simultaneous localization and mapping (VSLAM) has broad applications, with state-of-the-art methods leveraging deep neural networks for better robustness and applicability.
NeuFlow: Real-time, High-accuracy Optical Flow Estimation on Robots Using Edge Devices
Given the features of the input images extracted at different spatial resolutions, global matching is employed to estimate an initial optical flow at 1/16 resolution, capturing large displacements, which is then refined at 1/8 resolution with lightweight CNN layers for better accuracy.
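The snippet below is a schematic PyTorch sketch of that coarse-to-fine scheme, not the authors' NeuFlow code: global softmax matching over 1/16-resolution features yields an initial flow, which is then upsampled to 1/8 resolution for refinement. All shapes and module choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def global_match(f1, f2):
    # Global matching: every 1/16-resolution pixel of f1 is softmax-matched
    # against all pixels of f2; the expected match location minus the pixel's
    # own coordinate gives an initial flow that can capture large displacement.
    b, c, h, w = f1.shape
    corr = torch.einsum("bchw,bcuv->bhwuv", f1, f2) / c ** 0.5
    prob = corr.flatten(3).softmax(dim=-1).view(b, h, w, h, w)
    ys, xs = torch.meshgrid(torch.arange(h, dtype=f1.dtype),
                            torch.arange(w, dtype=f1.dtype), indexing="ij")
    grid = torch.stack((xs, ys))                    # (2, h, w) pixel coords
    match = torch.einsum("bhwuv,cuv->bchw", prob, grid)
    return match - grid.unsqueeze(0)                # expected displacement

# Illustrative shapes: 128-dim features at 1/16 of a 256x256 input.
f1, f2 = torch.randn(2, 1, 128, 16, 16).unbind(0)
flow_16 = global_match(f1, f2)
# Upsample the flow (scaling the vectors too) to 1/8 resolution, where a
# lightweight refinement CNN would run, as the abstract describes.
flow_8 = 2.0 * F.interpolate(flow_16, scale_factor=2, mode="bilinear")
```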
Rethinking Low-quality Optical Flow in Unsupervised Surgical Instrument Segmentation
Video-based surgical instrument segmentation plays an important role in robot-assisted surgeries.
LSTP: Language-guided Spatial-Temporal Prompt Learning for Long-form Video-Text Understanding
Despite progress in video-language modeling, the computational challenge of interpreting long-form videos in response to task-specific linguistic queries persists, largely due to the complexity of high-dimensional video data and the misalignment between language and visual cues over space and time.
CREMA: Multimodal Compositional Video Reasoning via Efficient Modular Adaptation and Fusion
Furthermore, we propose a fusion module designed to compress multimodal queries, maintaining computational efficiency in the LLM while combining additional modalities.
Taylor Videos for Action Recognition
Addressing these challenges, we propose the Taylor video, a new video format that highlights the dominant motions (e.g., a waving hand) in each of its frames, called Taylor frames.
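A minimal sketch of the idea, assuming a Taylor frame combines finite-difference approximations of temporal derivatives as a truncated Taylor series; this is an illustrative reading, not the paper's exact formulation.

```python
import numpy as np

def taylor_frame(frames, terms=3):
    # Illustrative assumption: a "Taylor frame" aggregates k-th order temporal
    # finite differences (approximating temporal derivatives) weighted by
    # 1/k!, as in a truncated Taylor expansion with a unit time step.
    diff = frames.astype(np.float64)
    out = np.zeros_like(diff[0])
    factorial = 1.0
    for k in range(terms):
        factorial *= max(k, 1)          # running k! (with 0! = 1! = 1)
        out += diff[0] / factorial      # k-th derivative term at the clip start
        diff = np.diff(diff, axis=0)    # next-order temporal difference
    return out

clip = np.random.rand(8, 64, 64)        # stand-in for an 8-frame grayscale clip
frame = taylor_frame(clip, terms=3)     # higher-order terms emphasize motion
```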
Recurrent Partial Kernel Network for Efficient Optical Flow Estimation
However, this impacts the widespread adoption of optical flow methods and makes it harder to train more general models, since optical flow data is hard to obtain.
Multimodal Action Quality Assessment
To leverage multimodal information for AQA, i.e., RGB, optical flow, and audio information, we propose a Progressive Adaptive Multimodal Fusion Network (PAMFN) that separately models modality-specific information and mixed-modality information.
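A toy PyTorch sketch of the stated design, i.e., separate modality-specific branches plus a mixed-modality branch feeding a score head; all names, feature dimensions, and the fusion form are assumptions, not PAMFN itself.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    # Toy sketch, not PAMFN: one encoder per modality captures
    # modality-specific information; a joint encoder over the concatenated
    # features captures mixed-modality information; a linear head scores AQA.
    def __init__(self, dims, hidden=256):
        super().__init__()
        self.specific = nn.ModuleDict({m: nn.Linear(d, hidden)
                                       for m, d in dims.items()})
        self.mixed = nn.Linear(sum(dims.values()), hidden)
        self.head = nn.Linear(hidden * (len(dims) + 1), 1)

    def forward(self, feats):
        parts = [enc(feats[m]).relu() for m, enc in self.specific.items()]
        mixed_in = torch.cat([feats[m] for m in self.specific], -1)
        parts.append(self.mixed(mixed_in).relu())
        return self.head(torch.cat(parts, -1))    # predicted quality score

dims = {"rgb": 2048, "flow": 1024, "audio": 128}  # assumed feature sizes
model = TwoBranchFusion(dims)
feats = {m: torch.randn(4, d) for m, d in dims.items()}
score = model(feats)                              # shape: (4, 1)
```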
VONet: Unsupervised Video Object Learning With Parallel U-Net Attention and Object-wise Sequential VAE
Unsupervised video object learning seeks to decompose video scenes into structural object representations without any supervision from depth, optical flow, or segmentation.