Referring Image Matting

3 papers with code • 0 benchmarks • 0 datasets

Extracting the meticulous alpha matte of the specific object in an image that best matches a given natural language description, e.g., a keyword or an expression.

Most implemented papers

Referring Image Matting

jizhizili/rim CVPR 2023

Different from conventional image matting, which either requires user-defined scribbles/trimap to extract a specific foreground object or directly extracts all the foreground objects in the image indiscriminately, we introduce a new task named Referring Image Matting (RIM) in this paper, which aims to extract the meticulous alpha matte of the specific object that best matches the given natural language description, thus enabling a more natural and simpler instruction for image matting.

Deep Image Matting: A Comprehensive Survey

jizhizili/matting-survey 10 Apr 2023

Image matting refers to extracting precise alpha matte from natural images, and it plays a critical role in various downstream applications, such as image editing.
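The alpha matte referred to here comes from the standard compositing model, in which each pixel of an image I is a per-pixel blend of a foreground F and a background B: I = αF + (1 − α)B. Matting is the inverse problem of recovering α (and F) from I. A minimal sketch of the forward compositing step, using toy values chosen purely for illustration:

```python
import numpy as np

def composite(fg, bg, alpha):
    """Blend a foreground over a background with a per-pixel alpha matte:
    I = alpha * F + (1 - alpha) * B. Matting inverts this to recover alpha."""
    return alpha * fg + (1.0 - alpha) * bg

# Toy 2x2 grayscale example (hypothetical values, not from any dataset).
fg = np.full((2, 2), 1.0)           # white foreground
bg = np.zeros((2, 2))               # black background
alpha = np.array([[1.0, 0.5],
                  [0.25, 0.0]])     # soft matte: fractional pixel coverage

img = composite(fg, bg, alpha)
print(img)  # fractional alpha values produce soft edges in the composite
```

Fractional α values (here 0.5 and 0.25) are what distinguish matting from binary segmentation: they model partial pixel coverage at hair, fur, and other soft boundaries.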

Matting Anything

shi-labs/matting-anything 8 Jun 2023

In this paper, we propose the Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance.