Video Matting
10 papers with code • 1 benchmark • 4 datasets
Image credit: https://arxiv.org/pdf/2012.07810v1.pdf
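All of the methods below estimate the alpha matte in the standard compositing equation I = αF + (1 − α)B, applied frame by frame. For reference, a minimal NumPy sketch of that equation:

```python
import numpy as np

# Core matting model: each frame I is a per-pixel blend of foreground F
# and background B, weighted by the alpha matte a in [0, 1]:
#     I = a * F + (1 - a) * B
# Video matting estimates a (and often F) for every frame, so the subject
# can be composited onto a new background.

def composite(fgr: np.ndarray, alpha: np.ndarray, bgr: np.ndarray) -> np.ndarray:
    """Blend a foreground onto a new background using an alpha matte.

    fgr, bgr: float arrays of shape (H, W, 3) in [0, 1]
    alpha:    float array of shape (H, W, 1) in [0, 1]
    """
    return alpha * fgr + (1.0 - alpha) * bgr
```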
Most implemented papers
MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition
MODNet is easy to train in an end-to-end manner.
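A minimal sketch of what such an objective decomposition can look like, assuming three supervised branches (coarse semantics, boundary detail, fused alpha) trained from one combined loss; the names and loss weights below are illustrative, not MODNet's actual code:

```python
import torch
import torch.nn.functional as F

# Hypothetical MODNet-style objective decomposition: the matting target is
# split into three supervised sub-objectives optimized jointly, so the
# network trains end to end from a single combined loss.

def modnet_style_loss(semantic_pred, detail_pred, alpha_pred,
                      alpha_gt, boundary_mask,
                      w_s=1.0, w_d=10.0, w_a=1.0):
    # Semantic branch: match an average-pooled (coarse) alpha silhouette.
    coarse_gt = F.avg_pool2d(alpha_gt, kernel_size=16)
    loss_s = F.mse_loss(semantic_pred, coarse_gt)
    # Detail branch: supervise only inside the boundary (transition) region.
    loss_d = (boundary_mask * (detail_pred - alpha_gt).abs()).mean()
    # Fusion branch: full-resolution alpha against ground truth.
    loss_a = F.l1_loss(alpha_pred, alpha_gt)
    return w_s * loss_s + w_d * loss_d + w_a * loss_a
```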
Flow-based Video Segmentation for Human Head and Shoulders
Video segmentation for the human head and shoulders is essential in creating elegant media for videoconferencing and virtual reality applications.
Deep Video Matting via Spatio-Temporal Alignment and Aggregation
Despite the significant progress made by deep learning in natural image matting, there has so far been no representative work on deep learning for video matting, due to the inherent technical challenges of reasoning over the temporal domain and the lack of large-scale video matting datasets.
Attention-guided Temporally Coherent Video Object Matting
Experimental results show that our method can generate high-quality alpha mattes for various videos featuring appearance change, occlusion, and fast motion.
Robust High-Resolution Video Matting with Temporal Guidance
We introduce a robust, real-time, high-resolution human video matting method that achieves new state-of-the-art performance.
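The published torch.hub entry point makes the temporal guidance explicit: recurrent states are threaded through successive frames. The snippet below follows the usage shown in the project README (https://github.com/PeterL1n/RobustVideoMatting); `video_frames` is a placeholder for your own frame loader:

```python
import torch

# Load Robust Video Matting via torch.hub, as documented in the README.
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").eval()

rec = [None] * 4          # initial recurrent states, updated every frame
downsample_ratio = 0.25   # tune per input resolution

with torch.no_grad():
    for src in video_frames:  # src: (1, 3, H, W) float tensor in [0, 1]
        # fgr: predicted foreground, pha: alpha matte for this frame;
        # the recurrent states carry temporal context to the next frame.
        fgr, pha, *rec = model(src, *rec, downsample_ratio)
```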
One-Trimap Video Matting
The key to OTVM is the joint modeling of trimap propagation and alpha prediction.
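A hedged sketch of that joint loop, assuming a single user-supplied trimap on the first frame; `propagate_trimap` and `predict_alpha` are hypothetical stand-ins, not functions from the OTVM codebase:

```python
# One-trimap matting: the user annotates only the first frame; trimap
# propagation and alpha prediction then run jointly, with each frame's
# refined trimap seeding the next frame.

def one_trimap_matting(frames, first_trimap, propagate_trimap, predict_alpha):
    alphas = []
    prev = frames[0]
    # Alpha prediction also returns a refined trimap (the "joint" part).
    alpha, trimap = predict_alpha(prev, first_trimap)
    alphas.append(alpha)
    for cur in frames[1:]:
        trimap = propagate_trimap(prev, cur, trimap)  # warp trimap forward
        alpha, trimap = predict_alpha(cur, trimap)    # predict + refine
        alphas.append(alpha)
        prev = cur
    return alphas
```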
VMFormer: End-to-End Video Matting with Transformer
In this paper, we propose VMFormer: a transformer-based end-to-end method for video matting.
Ultrahigh Resolution Image/Video Matting With Spatio-Temporal Sparsity
Instead, our method resorts to spatial and temporal sparsity for solving general UHR matting.
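One way to read the spatial-sparsity idea, as a sketch: trust a coarse alpha wherever it is confidently 0 or 1, and re-run full-resolution inference only on tiles that overlap the uncertain transition band. `coarse_alpha` and `refine_patch` are hypothetical helpers, not the paper's actual API:

```python
import numpy as np

# Sparse refinement for ultrahigh-resolution matting: only tiles containing
# uncertain alpha values (the transition band) are recomputed at full
# resolution; confident foreground/background tiles keep the coarse result.

def sparse_uhr_matting(frame, coarse_alpha, refine_patch,
                       patch=512, lo=0.05, hi=0.95):
    H, W = frame.shape[:2]
    alpha = np.asarray(coarse_alpha, dtype=np.float32)  # upsampled to (H, W)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            tile = alpha[y:y + patch, x:x + patch]
            # Skip tiles that are confidently foreground or background.
            if ((tile > lo) & (tile < hi)).any():
                alpha[y:y + patch, x:x + patch] = refine_patch(
                    frame[y:y + patch, x:x + patch], tile)
    return alpha
```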
End-to-End Video Matting With Trimap Propagation
Although recent studies exploit video object segmentation methods to propagate the given trimaps, they suffer from inconsistent results.
Video Instance Matting
To remedy this deficiency, we propose Video Instance Matting (VIM), that is, estimating the alpha matte of each instance at each frame of a video sequence.
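A hypothetical sketch of the per-instance output contract this implies: one matte per instance per frame, with overlaps normalized so that instance alphas and the background sum to one at every pixel. The names are illustrative, not from the VIM codebase:

```python
import numpy as np

# Per-instance mattes should partition each pixel: where raw instance
# alphas overlap and sum past 1, rescale them so the frame remains a
# valid composite of all instances plus background.

def normalize_instance_alphas(alphas: np.ndarray) -> np.ndarray:
    """alphas: (num_instances, H, W) raw per-instance mattes in [0, 1]."""
    total = alphas.sum(axis=0, keepdims=True)
    scale = np.where(total > 1.0, 1.0 / np.maximum(total, 1e-6), 1.0)
    return alphas * scale
```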