
Criss-Cross Network

Introduced by Huang et al. in CCNet: Criss-Cross Attention for Semantic Segmentation

Criss-Cross Network (CCNet) aims to obtain full-image contextual information in an effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path, i.e., the pixels in the same row and the same column. By applying the operation recurrently, each pixel can finally capture full-image dependencies. CCNet has the following merits: 1) GPU memory friendly: compared with the non-local block, the recurrent criss-cross attention module requires 11× less GPU memory. 2) High computational efficiency: recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85%. 3) State-of-the-art performance on semantic segmentation benchmarks.
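The efficiency gain comes from restricting each query pixel's attention to the H+W−1 positions on its row and column, rather than all H×W positions as in a non-local block. The following is a minimal NumPy sketch of that idea (single head, no learned query/key/value projections, and the function name is our own), not the authors' implementation:

```python
import numpy as np

def criss_cross_attention(q, k, v):
    """Sketch of criss-cross attention: each pixel attends only to the
    H + W - 1 pixels on its own row and column (its criss-cross path),
    instead of all H * W pixels as in a non-local block.

    q, k, v: arrays of shape (H, W, C). Returns an (H, W, C) array.
    """
    H, W, C = q.shape
    out = np.zeros((H, W, C))
    for i in range(H):
        for j in range(W):
            # Keys/values on the criss-cross path of pixel (i, j):
            # its full row, plus its column with (i, j) removed so the
            # pixel itself is counted only once.
            path_k = np.concatenate([k[i, :, :], np.delete(k[:, j, :], i, axis=0)])
            path_v = np.concatenate([v[i, :, :], np.delete(v[:, j, :], i, axis=0)])
            # Affinity of the query with each path position, then a
            # softmax over the H + W - 1 positions.
            scores = path_k @ q[i, j]
            scores -= scores.max()           # numerical stability
            w = np.exp(scores)
            w /= w.sum()
            # Aggregate path values with the attention weights.
            out[i, j] = w @ path_v
    return out
```

Applying the module twice (the recurrent operation, R = 2 in the paper) lets information flow from any pixel to any other: pixel (a, b) reaches (c, d) via the shared path pixel (a, d) or (c, b).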

Source: CCNet: Criss-Cross Attention for Semantic Segmentation

Latest Papers

PAPER | AUTHORS | DATE
CCNet: Criss-Cross Attention for Semantic Segmentation | Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Humphrey Shi, Wenyu Liu, Thomas S. Huang | 2018-11-28

Tasks

TASK PAPERS SHARE
Human Parsing 1 20.00%
Instance Segmentation 1 20.00%
Object Detection 1 20.00%
Semantic Segmentation 1 20.00%
Video Semantic Segmentation 1 20.00%

Components

No components found.

Categories