Motion estimation is used to determine the block-wise or pixel-wise motion vectors between two frames.
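Block-wise motion estimation is commonly done by exhaustive block matching: for each block of the current frame, search a small window in the previous frame for the displacement minimising a matching cost such as the sum of absolute differences (SAD). The sketch below illustrates this idea; the function name `block_matching` and its parameters are illustrative, not from any specific paper cited here.

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """Minimal sketch of exhaustive-search block matching.

    For each `block` x `block` tile of `curr`, find the displacement
    (dy, dx) within +/- `search` pixels that minimises the sum of
    absolute differences (SAD) against `prev`. Returns one (dy, dx)
    motion vector per block.
    """
    H, W = curr.shape
    rows, cols = H // block, W // block
    vectors = np.zeros((rows, cols, 2), dtype=int)
    for by in range(rows):
        for bx in range(cols):
            y0, x0 = by * block, bx * block
            cur_blk = curr[y0:y0 + block, x0:x0 + block].astype(int)
            best_sad, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    # Skip candidate positions that fall outside the frame.
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue
                    ref_blk = prev[y:y + block, x:x + block].astype(int)
                    sad = np.abs(cur_blk - ref_blk).sum()
                    if sad < best_sad:
                        best_sad, best_dy, best_dx = sad, dy, dx
            vectors[by, bx] = (best_dy, best_dx)
    return vectors
```

Real codecs and trackers replace the exhaustive search with hierarchical or diamond search patterns for speed, but the cost function and per-block output are the same.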
This paper proposes a vision-based method for video sky replacement and harmonization, which can automatically generate realistic and dramatic sky backgrounds in videos with controllable styles.
To alleviate this weakness, we propose a new globally optimal event-based motion estimation algorithm.
A recent alternative is to use event sensors, which could enable more energy-efficient and faster star trackers.
Deep learning for predicting or generating 3D human pose sequences is an active research area.
Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality.
Self-supervised learning has emerged as a powerful tool for depth and ego-motion estimation, leading to state-of-the-art results on benchmark datasets.