To achieve the above properties, we propose a simple yet effective long-range pooling (LRP) module based on dilated max pooling, which provides the network with a large, adaptive receptive field.
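The snippet does not spell out the module's internals, so the composition below is an assumption: a minimal PyTorch sketch of how stacking max-pooling layers with growing dilation enlarges the receptive field while keeping the feature-map resolution fixed. The class name DilatedMaxPoolLRP and the dilation schedule are hypothetical.

    import torch
    import torch.nn as nn

    class DilatedMaxPoolLRP(nn.Module):
        """Hypothetical sketch of a long-range pooling (LRP) block.

        Stacks max-pooling layers with increasing dilation so the
        effective receptive field grows quickly while stride=1 and
        'same' padding preserve spatial resolution.
        """
        def __init__(self, kernel_size: int = 3, dilations=(1, 2, 4)):
            super().__init__()
            self.pools = nn.ModuleList([
                nn.MaxPool2d(kernel_size, stride=1,
                             padding=d * (kernel_size - 1) // 2,
                             dilation=d)
                for d in dilations
            ])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Composed dilated max pools cover a very large window;
            # the residual connection keeps local detail.
            out = x
            for pool in self.pools:
                out = pool(out)
            return out + x

    x = torch.randn(1, 64, 32, 32)
    y = DilatedMaxPoolLRP()(x)
    assert y.shape == x.shape

Because max pooling has no learned weights, this kind of block adds receptive field at almost no parameter cost, which fits the "simple yet effective" framing above.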
Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard with only 1/10 of its parameters.
Ranked #1 on Semantic Segmentation on iSAID
In this paper, we propose a novel linear attention named large kernel attention (LKA), which enables the self-adaptive and long-range correlations of self-attention while avoiding its shortcomings.
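As a rough illustration, here is a hedged PyTorch sketch of the decomposition behind LKA: a large-kernel convolution is approximated by a depth-wise conv, a depth-wise dilated conv, and a point-wise (1x1) conv, and the result gates the input element-wise, keeping the cost linear in the number of pixels. The specific kernel sizes (5x5, then 7x7 with dilation 3) follow the commonly cited VAN configuration and should be treated as assumptions.

    import torch
    import torch.nn as nn

    class LKA(nn.Module):
        """Sketch of large kernel attention (LKA).

        Decomposes a large-kernel convolution into depth-wise,
        depth-wise dilated, and 1x1 convolutions, then uses the
        output as an attention map via element-wise multiplication.
        """
        def __init__(self, dim: int):
            super().__init__()
            self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
            self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9,
                                        dilation=3, groups=dim)
            self.pw = nn.Conv2d(dim, dim, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            attn = self.pw(self.dw_dilated(self.dw(x)))
            return attn * x  # gating instead of softmax attention

The element-wise gate is what makes the operation linear in image size, in contrast to the quadratic pairwise scores of standard self-attention.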
Ranked #1 on Panoptic Segmentation on COCO panoptic
Humans can naturally and effectively find salient regions in complex scenes.
As an essential ingredient of modern deep learning, the attention mechanism, and self-attention in particular, plays a vital role in discovering global correlations.
Ranked #7 on Semantic Segmentation on PASCAL VOC 2012 test
Meshes with arbitrary connectivity can be remeshed, via self-parameterization, to the connectivity of a Loop subdivision sequence, making SubdivNet a general approach.
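To make "Loop subdivision sequence connectivity" concrete, below is a minimal NumPy sketch of one 1-to-4 connectivity refinement step, where each triangle is split using its edge midpoints. Function names are illustrative, and the actual self-parameterization and remeshing pipeline is considerably more involved; this shows only the connectivity pattern.

    import numpy as np

    def subdivide_connectivity(faces: np.ndarray):
        """One 1-to-4 subdivision step (connectivity only).

        Each triangle (a, b, c) splits into four triangles using
        edge midpoints; shared edges reuse the same midpoint index,
        so the refined mesh stays watertight. Vertex positions and
        Loop smoothing rules are omitted.
        """
        next_vid = faces.max() + 1
        midpoint = {}  # undirected edge -> new vertex id

        def mid(u, v):
            nonlocal next_vid
            key = (min(u, v), max(u, v))
            if key not in midpoint:
                midpoint[key] = next_vid
                next_vid += 1
            return midpoint[key]

        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc),
                          (ca, bc, c), (ab, bc, ca)]
        return np.array(new_faces), midpoint

    faces = np.array([[0, 1, 2], [0, 2, 3]])  # shared edge (0, 2)
    f2, _ = subdivide_connectivity(faces)
    assert len(f2) == 4 * len(faces)

This regular 1-to-4 structure is what lets convolution-like operators be defined on the mesh hierarchy.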
In the first week of May 2021, researchers from four institutions (Google, Tsinghua University, Oxford University, and Facebook) shared their latest work [16, 7, 12, 17] on arXiv.org almost simultaneously, each proposing a new learning architecture consisting mainly of linear layers and claiming it to be comparable, or even superior, to convolution-based models; a sketch of such a block follows.
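For intuition, here is a minimal PyTorch sketch of the kind of all-MLP block those works describe: one MLP mixes information across tokens (spatial positions) and another across channels, with no convolutions or self-attention. Names and sizes are illustrative and do not reproduce any specific paper's architecture.

    import torch
    import torch.nn as nn

    class MLPBlock(nn.Module):
        """Illustrative all-MLP block.

        A token-mixing MLP acts across the sequence of patches and
        a channel-mixing MLP acts across features; both are plain
        linear layers with a nonlinearity and residual connections.
        """
        def __init__(self, num_tokens: int, dim: int, hidden: int = 256):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.token_mlp = nn.Sequential(
                nn.Linear(num_tokens, hidden), nn.GELU(),
                nn.Linear(hidden, num_tokens))
            self.norm2 = nn.LayerNorm(dim)
            self.channel_mlp = nn.Sequential(
                nn.Linear(dim, hidden), nn.GELU(),
                nn.Linear(hidden, dim))

        def forward(self, x):                  # x: (batch, tokens, dim)
            y = self.norm1(x).transpose(1, 2)  # mix across tokens
            x = x + self.token_mlp(y).transpose(1, 2)
            return x + self.channel_mlp(self.norm2(x))

The transpose is the whole trick: the same Linear layer that normally mixes channels is applied along the token axis instead, replacing attention's role of exchanging information between positions.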
Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks.
Ranked #16 on Semantic Segmentation on PASCAL VOC 2012 test
It is inherently permutation invariant for processing a sequence of points, making it well-suited for point cloud learning.
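A small self-contained check of that property, using plain dot-product attention rather than the paper's exact operator: permuting the input points permutes the per-point outputs identically (equivariance), so any order-agnostic pooling on top yields a permutation-invariant global feature.

    import torch

    def self_attention(points: torch.Tensor) -> torch.Tensor:
        """Plain dot-product self-attention over a set of N points.

        (batch, N, d) -> (batch, N, d); no positional encoding,
        so the operator has no notion of input order.
        """
        scores = points @ points.transpose(1, 2) / points.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ points

    pts = torch.randn(1, 128, 16)
    perm = torch.randperm(128)

    out = self_attention(pts)
    out_perm = self_attention(pts[:, perm])

    # Per-point features are permuted identically (equivariance)...
    assert torch.allclose(out[:, perm], out_perm, atol=1e-5)
    # ...so an order-agnostic pooling gives an invariant global code.
    assert torch.allclose(out.max(dim=1).values,
                          out_perm.max(dim=1).values, atol=1e-5)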
Ranked #2 on 3D Point Cloud Classification on IntrA