Motion Compensation
61 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Motion Compensation.
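Motion compensation predicts a frame from a reference frame plus estimated motion. As a minimal sketch of the classical block-matching form of the technique (a toy illustration, not the method of any paper listed below; block size and search range are arbitrary small values):

```python
import numpy as np

def motion_compensate(ref, cur, block=4, search=2):
    """Predict `cur` from `ref` by exhaustive block matching.

    For each block of the current frame, search a small window in the
    reference frame for the candidate with minimum sum of absolute
    differences (SAD) and copy it into the prediction.
    """
    h, w = cur.shape
    pred = np.zeros_like(cur)
    for y in range(0, h, block):
        for x in range(0, w, block):
            target = cur[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + block > h or rx + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[ry:ry + block, rx:rx + block]
                    sad = np.abs(cand.astype(int) - target.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[y:y + block, x:x + block] = ref[y + dy:y + dy + block,
                                                 x + dx:x + dx + block]
    return pred

# A frame shifted by one pixel: interior blocks are predicted exactly.
ref = np.arange(64, dtype=np.uint8).reshape(8, 8)
cur = np.roll(ref, 1, axis=1)
pred = motion_compensate(ref, cur)
```

Real codecs refine this with sub-pixel interpolation, variable block sizes, and rate-distortion-aware vector selection; the exhaustive search above is only for clarity.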
Latest papers
BoostTrack: boosting the similarity measure and detection confidence for improved multiple object tracking
To utilize low-score detection bounding boxes in one-stage association, we propose boosting the confidence scores of two groups of detections: those we assume correspond to an existing tracked object, and those we assume correspond to a previously undetected object.
IVIM-Morph: Motion-compensated quantitative Intra-voxel Incoherent Motion (IVIM) analysis for functional fetal lung maturity assessment from diffusion-weighted MRI data
IVIM-Morph combines two sub-networks, a registration sub-network and an IVIM model-fitting sub-network, enabling simultaneous estimation of IVIM model parameters and motion.
UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation
In response to this, we introduce UCMCTrack, a novel motion model-based tracker robust to camera movements.
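The basic camera-motion-compensation step used in many trackers can be sketched generically: warp the predicted track boxes by an estimated frame-to-frame camera motion so they stay aligned with new detections. This is an illustrative sketch of that generic step only, not UCMCTrack's actual formulation; the function name and the affine-motion assumption are mine:

```python
import numpy as np

def compensate_boxes(boxes, A):
    """Warp (x1, y1, x2, y2) track boxes by a 2x3 affine camera-motion
    estimate `A`, so track predictions stay aligned with detections
    after the camera moves. Generic illustration, not UCMCTrack's
    motion model."""
    boxes = np.asarray(boxes, dtype=float)
    corners = boxes.reshape(-1, 2, 2)               # (N, 2 corners, xy)
    ones = np.ones((*corners.shape[:2], 1))
    warped = np.concatenate([corners, ones], axis=-1) @ A.T
    return warped.reshape(-1, 4)

# Camera panned 5 px right: boxes shift by the same translation.
A = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 0.0]])
out = compensate_boxes([[10, 20, 30, 40]], A)       # → [[15., 20., 35., 40.]]
```

In practice `A` would come from sparse feature matching between consecutive frames; here it is hard-coded to keep the sketch self-contained.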
Geometry-Corrected Geodesic Motion Modeling with Per-Frame Camera Motion for 360-Degree Video Compression
The large amounts of data associated with 360-degree video require highly effective compression techniques for efficient storage and distribution.
IBVC: Interpolation-driven B-frame Video Compression
Learned B-frame video compression aims to adopt bi-directional motion estimation and motion compensation (MEMC) coding for middle frame reconstruction.
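The MEMC idea for B-frames can be illustrated with a toy example: motion-compensate the previous and next reference frames toward the middle timestamp, then blend them. Learned codecs estimate dense flow with networks; here global integer motion vectors stand in, and all names and values are illustrative:

```python
import numpy as np

def bi_predict(prev_frame, next_frame, mv_prev, mv_next):
    """Predict a middle frame by motion-compensating the previous and
    next frames with integer global motion vectors and averaging.
    Toy bi-directional MEMC sketch."""
    comp_prev = np.roll(prev_frame, mv_prev, axis=(0, 1))
    comp_next = np.roll(next_frame, mv_next, axis=(0, 1))
    # Average in a wider dtype to avoid uint8 overflow.
    return ((comp_prev.astype(np.uint16) + comp_next) // 2).astype(np.uint8)

# An object moving 2 px/frame: the middle frame sits 1 px from each side.
prev_f = np.zeros((6, 6), dtype=np.uint8); prev_f[2, 1] = 255
next_f = np.zeros((6, 6), dtype=np.uint8); next_f[2, 3] = 255
mid = bi_predict(prev_f, next_f, (0, 1), (0, -1))
```

Both compensated frames place the object at the same middle position, so the blend reconstructs it cleanly; with imperfect motion, the blend would produce ghosting, which is what learned refinement modules correct.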
LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video Reconstruction
In this paper, we propose an end-to-end HDR video composition framework, which aligns LDR frames in the feature space and then merges aligned features into an HDR frame, without relying on pixel-domain optical flow.
High Dynamic Range Imaging of Dynamic Scenes with Saturation Compensation but without Explicit Motion Compensation
For HDR imaging, some methods capture multiple low dynamic range (LDR) images with varying exposures to aggregate more information.
Pedestrian Environment Model for Automated Driving
We only use images from a monocular camera and the vehicle's localization data as input to our pedestrian environment model.
Multi-Scale Deformable Alignment and Content-Adaptive Inference for Flexible-Rate Bi-Directional Video Compression
The inability to adapt the motion compensation model to video content is an important limitation of current end-to-end learned video compression models.
Ultrafast Cardiac Imaging Using Deep Learning For Speckle-Tracking Echocardiography
The obtained results showed that, while using only three DWs as input, the CNN-based approach yielded an image quality and a motion accuracy equivalent to those obtained by compounding 31 DWs free of motion artifacts.