Motion Compensation
62 papers with code • 0 benchmarks • 1 dataset
Latest papers
Ultrafast Cardiac Imaging Using Deep Learning For Speckle-Tracking Echocardiography
The obtained results showed that, while using only three DWs as input, the CNN-based approach yielded an image quality and a motion accuracy equivalent to those obtained by compounding 31 DWs free of motion artifacts.
Aligning Bird-Eye View Representation of Point Cloud Sequences using Scene Flow
Such concatenation is possible thanks to the removal of ego vehicle motion using its odometry.
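Removing ego-vehicle motion before concatenating point cloud sweeps amounts to re-expressing past points in the current vehicle frame using the relative odometry pose. A minimal sketch of that transform (names and the world-frame pose convention are illustrative assumptions, not the paper's code):

```python
import numpy as np

def compensate_ego_motion(points, pose_src, pose_dst):
    """Map 3D points recorded in the source sweep into the destination
    sweep's frame using 4x4 homogeneous ego-vehicle odometry poses.

    points:    (N, 3) xyz coordinates in the source vehicle frame.
    pose_src:  (4, 4) pose of the vehicle at the source timestamp
               (world <- vehicle), as reported by odometry.
    pose_dst:  (4, 4) pose of the vehicle at the destination timestamp.
    """
    # Relative transform taking source-frame coordinates into the
    # destination frame: dst <- world <- src.
    rel = np.linalg.inv(pose_dst) @ pose_src
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ rel.T)[:, :3]
```

After this transform, static scene points from different sweeps land at the same coordinates, so only genuinely moving objects leave motion trails in the concatenated cloud.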
Density Invariant Contrast Maximization for Neuromorphic Earth Observations
This is to ensure that the contrast is only high around the correct motion parameters.
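Contrast maximization for event cameras scores a candidate motion by warping each event to a common reference time and measuring how sharply the warped events pile up, typically via the variance of the accumulated event image. A minimal sketch under simple assumptions (2D constant-velocity warp, count-based image; not the density-invariant variant this paper proposes):

```python
import numpy as np

def contrast(events, velocity, img_shape):
    """Score a candidate 2D velocity by the variance (contrast) of the
    image of motion-compensated events.

    events:   (N, 3) array with columns x, y, t.
    velocity: (vx, vy) candidate optical-flow velocity in pixels/unit time.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Warp every event back to the reference time t = 0.
    xw = np.round(x - velocity[0] * t).astype(int)
    yw = np.round(y - velocity[1] * t).astype(int)
    h, w = img_shape
    ok = (xw >= 0) & (xw < w) & (yw >= 0) & (yw < h)
    img = np.zeros(img_shape)
    np.add.at(img, (yw[ok], xw[ok]), 1.0)  # accumulate event counts
    return img.var()
```

At the true motion parameters the events collapse onto sharp edges and the variance peaks; the density-invariant formulation addresses cases where this objective is otherwise biased by uneven event density.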
Event-based Simultaneous Localization and Mapping: A Comprehensive Survey
This paper presents a timely and comprehensive review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
Gradient-Based Geometry Learning for Fan-Beam CT Reconstruction
The cost function is parameterized by a trained neural network which regresses an image quality metric from the motion affected reconstruction alone.
Weakly-Supervised Optical Flow Estimation for Time-of-Flight
Indirect Time-of-Flight (iToF) cameras are a widespread type of 3D sensor, which perform multiple captures to obtain depth values of the captured scene.
qDWI-Morph: Motion-compensated quantitative Diffusion-Weighted MRI analysis for fetal lung maturity assessment
Our approach couples a registration sub-network with a quantitative DWI model fitting sub-network.
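The quantitative side of such a pipeline fits a signal-decay model to the registered DWI volumes. As a hedged illustration (a plain mono-exponential ADC fit via log-linear least squares, not the paper's specific fetal-lung model or its network-based fitting), the per-voxel fit looks like:

```python
import numpy as np

def fit_adc(b_values, signals):
    """Least-squares fit of the mono-exponential DWI model
    S(b) = S0 * exp(-b * ADC), linearized as
    log S(b) = log S0 - b * ADC.

    b_values: (K,) diffusion weightings in s/mm^2.
    signals:  (K,) positive signal intensities at one voxel.
    Returns (S0, ADC).
    """
    A = np.vstack([np.ones_like(b_values, dtype=float), -b_values]).T
    coef, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
    return np.exp(coef[0]), coef[1]
```

Coupling this fit with a registration sub-network means the model residual can also serve as a motion-compensation loss: well-aligned volumes fit the decay model better.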
Exploring Long- and Short-Range Temporal Information for Learned Video Compression
Learned video compression methods have attracted considerable interest in the video coding community, since they have matched or even exceeded the rate-distortion (RD) performance of traditional video codecs.
Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis
In this paper, we present a spatial-temporal compression framework, \textbf{Fast-Vid2Vid}, which focuses on data aspects of generative models.
Real-Time Video Deblurring via Lightweight Motion Compensation
While motion compensation greatly improves video deblurring quality, performing motion compensation and video deblurring as separate stages incurs substantial computational overhead.
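The classical form of motion compensation referenced here predicts each block of the current frame from a displaced block of the previous frame. A deliberately naive exhaustive block-matching sketch (illustrative only; real codecs and the lightweight module in this paper are far more efficient):

```python
import numpy as np

def motion_compensate(prev, curr, block=8, search=4):
    """Build a motion-compensated prediction of `curr` from `prev` by
    exhaustive block matching: for each block of the current frame,
    pick the displacement within +/-search pixels that minimizes the
    sum of absolute differences (SAD)."""
    h, w = curr.shape
    pred = np.zeros_like(curr)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = curr[by:by + block, bx:bx + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block]
                        err = np.abs(cand - tgt).sum()  # SAD cost
                        if err < best_err:
                            best_err, best = err, (dy, dx)
            dy, dx = best
            pred[by:by + block, bx:bx + block] = \
                prev[by + dy:by + dy + block, bx + dx:bx + dx + block]
    return pred
```

The residual `curr - pred` is what a video codec encodes, or what a deblurring network refines; the cost of this search per frame is exactly the overhead that lightweight motion-compensation designs try to avoid.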