StableNet: Semi-Online, Multi-Scale Deep Video Stabilization

24 Jul 2019  ·  Chia-Hung Huang, Hang Yin, Yu-Wing Tai, Chi-Keung Tang ·

Video stabilization algorithms have become increasingly important with the prevalence of hand-held devices, which unavoidably produce videos with undesirable shaky motion. In this paper we propose a data-driven online video stabilization method along with a paired dataset for deep learning. The network processes each unsteady frame progressively in a multi-scale manner, from low resolution to high resolution, and outputs an affine transformation to stabilize the frame. Unlike conventional methods that require explicit feature tracking or optical flow estimation, the underlying stabilization process is learned implicitly from the training data, and stabilization can be performed online. Since few public video stabilization datasets are available, we synthesized unstable videos with varying degrees of shake that simulate real-life camera movement. Experiments show that our method outperforms other stabilization methods on several unstable samples while remaining comparable in general. Our method was also tested on complex content and found robust enough to dampen shake to some extent, even though it was not explicitly trained on such content.
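The coarse-to-fine affine estimation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_at_scale` is a hypothetical stand-in for the network's per-scale prediction, and the naive stride-based downsampling merely mimics the low-to-high resolution pyramid.

```python
import numpy as np

def compose_affine(A, B):
    """Compose two 2x3 affine transforms (apply B, then A) via homogeneous coordinates."""
    Ah = np.vstack([A, [0.0, 0.0, 1.0]])
    Bh = np.vstack([B, [0.0, 0.0, 1.0]])
    return (Ah @ Bh)[:2]

def multiscale_stabilize(frame, predict_at_scale, n_scales=3):
    """Estimate a stabilizing affine coarse-to-fine.

    predict_at_scale(img, A) is a placeholder for the paper's network:
    it returns a residual 2x3 affine refinement at the given resolution.
    """
    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])          # start from the identity affine
    for s in reversed(range(n_scales)):      # coarsest scale first
        small = frame[::2 ** s, ::2 ** s]    # naive downsample of the frame
        delta = predict_at_scale(small, A)   # residual correction at this scale
        A = compose_affine(delta, A)         # accumulate into the running estimate
    return A
```

The final 2x3 matrix would then be applied to the frame with any standard affine warp (e.g. `cv2.warpAffine`); composing residuals in homogeneous coordinates keeps each scale's refinement consistent with the coarser estimates.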
