Image Matting
96 papers with code • 8 benchmarks • 8 datasets
Image Matting is the process of accurately estimating the foreground object in images and videos. It is an important technique in image and video editing, particularly in film production for creating visual effects. Image segmentation, by contrast, partitions an image into foreground and background by labeling each pixel, producing a binary map in which every pixel belongs to either the foreground or the background. Image matting differs in that some pixels, called partial or mixed pixels, may belong to both the foreground and the background. Fully separating the foreground from the background therefore requires accurately estimating the alpha values of these partial or mixed pixels.
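The role of these alpha values is captured by the standard compositing equation, I = αF + (1 − α)B, where α is the per-pixel opacity. The sketch below (a minimal illustration, not from any specific matting paper; the `composite` helper and the toy 1×2 image are made up for demonstration) shows how a fractional alpha blends foreground and background at a mixed pixel, whereas binary segmentation would force α to 0 or 1:

```python
import numpy as np

# Compositing equation underlying matting:
#   I = alpha * F + (1 - alpha) * B
# Binary segmentation restricts alpha to {0, 1}; matting allows
# fractional alpha at partial or mixed pixels (e.g. hair, motion blur).

def composite(fg, bg, alpha):
    """Blend foreground over background using an alpha matte.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1]
    alpha:  float array of shape (H, W), values in [0, 1]
    """
    a = alpha[..., None]          # broadcast alpha over the color channels
    return a * fg + (1.0 - a) * bg

# Toy 1x2 image: left pixel pure foreground, right pixel 50% mixed.
fg = np.ones((1, 2, 3))           # white foreground
bg = np.zeros((1, 2, 3))          # black background
alpha = np.array([[1.0, 0.5]])

out = composite(fg, bg, alpha)
print(out[0, 0])                  # -> [1. 1. 1.]  (pure foreground)
print(out[0, 1])                  # -> [0.5 0.5 0.5]  (mixed pixel)
```

Estimating α (and often F) from I alone is the ill-posed inverse of this equation, which is why many methods rely on auxiliary guidance such as trimaps.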
Source: Automatic Trimap Generation for Image Matting
Image Source: Real-Time High-Resolution Background Matting
Libraries
Use these libraries to find Image Matting models and implementations.
Latest papers
Learning Trimaps via Clicks for Image Matting
Despite significant advancements in image matting, existing models heavily depend on manually-drawn trimaps for accurate results in natural image scenarios.
In-Context Matting
We introduce in-context matting, a novel task setting of image matting.
End-to-End Human Instance Matting
Finally, an instance matting network decodes the image features and united semantics guidance to predict all instance-level alpha mattes.
Dual-Context Aggregation for Universal Image Matting
However, existing matting methods are designed for specific objects or guidance, neglecting the common requirement of aggregating global and local contexts in image matting.
Diffusion for Natural Image Matting
However, the presence of high computational overhead and the inconsistency of noise sampling between the training and inference processes pose significant obstacles to achieving this goal.
Video Instance Matting
To remedy this deficiency, we propose Video Instance Matting (VIM), that is, estimating alpha mattes of each instance at each frame of a video sequence.
OmnimatteRF: Robust Omnimatte with 3D Background Modeling
Video matting has broad applications, from adding interesting effects to casually captured movies to assisting video production professionals.
On Point Affiliation in Feature Upsampling
We introduce the notion of point affiliation into feature upsampling.
Matting Anything
In this paper, we propose the Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance.
Matte Anything: Interactive Natural Image Matting with Segment Anything Models
In our work, we leverage vision foundation models to enhance the performance of natural image matting.