In this paper, we propose a novel linear attention mechanism, named large kernel attention (LKA), to enable the self-adaptive and long-range correlations of self-attention while avoiding its shortcomings.
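For concreteness, below is a minimal PyTorch sketch of a large-kernel-attention-style block. The specific decomposition (a depth-wise convolution, a dilated depth-wise convolution, and a point-wise convolution, whose output modulates the input) is an illustrative assumption, not a claim about the exact design described above.

```python
# A minimal sketch of a large-kernel-attention-style block (assumed
# decomposition: depth-wise conv + dilated depth-wise conv + 1x1 conv).
import torch
import torch.nn as nn

class LKASketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # 5x5 depth-wise conv captures local structure.
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # 7x7 depth-wise conv with dilation 3 approximates a large
        # receptive field (effective kernel 19x19) at low cost.
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9,
                                    dilation=3, groups=dim)
        # 1x1 conv mixes channels.
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x  # learned attention map modulates the input

x = torch.randn(1, 64, 32, 32)
print(LKASketch(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

The element-wise product at the end is what makes this attention-like rather than a plain convolutional block: the branch output acts as a per-position, per-channel gate on the input features.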
Humans can naturally and effectively find salient regions in complex scenes.
Meshes with arbitrary connectivity can be remeshed to have Loop subdivision sequence connectivity via self-parameterization, making SubdivNet a general approach.
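The connectivity structure being targeted is easy to illustrate with a short, self-contained sketch: one topological 1-to-4 subdivision step splits every triangle via its edge midpoints, so the face count grows by a factor of four per level, which is exactly the subdivision-sequence regularity the remeshing produces. The function and the tetrahedron input are illustrative assumptions; self-parameterization-based remeshing itself is considerably more involved.

```python
# A minimal sketch of one topological (Loop-style) subdivision step:
# each triangle splits into four via edge midpoints.
def subdivide(vertices, faces):
    """One 1-to-4 split; returns new vertex and face lists."""
    vertices = [tuple(v) for v in vertices]
    midpoint_index = {}

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_index:
            va, vb = vertices[a], vertices[b]
            vertices.append(tuple((x + y) / 2.0 for x, y in zip(va, vb)))
            midpoint_index[key] = len(vertices) - 1
        return midpoint_index[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_faces

# A tetrahedron: 4 faces -> 16 -> 64, i.e. |F_k| = 4**k * |F_0|.
V = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
F = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
for _ in range(2):
    V, F = subdivide(V, F)
print(len(F))  # 64
```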
In the first week of May 2021, researchers from four institutions (Google, Tsinghua University, Oxford University, and Facebook) shared their latest work [16, 7, 12, 17] on arXiv.org almost simultaneously, each proposing a new learning architecture consisting mainly of linear layers and claiming it to be comparable, or even superior, to convolution-based models.
Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks.
Self-attention is inherently permutation-invariant when processing a sequence of points, making it well-suited for point cloud learning.
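This property is easy to check empirically. The sketch below (an assumed toy setup, not any particular paper's architecture) shows that self-attention without positional encodings is permutation-equivariant per point, so a symmetric pooling over points yields an order-independent feature.

```python
# Demonstration that self-attention + symmetric pooling is
# permutation-invariant (toy setup, assumed for illustration).
import torch
import torch.nn as nn

torch.manual_seed(0)
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
attn.eval()

points = torch.randn(1, 128, 32)   # features for 128 points
perm = torch.randperm(128)
shuffled = points[:, perm, :]      # same point set, different order

with torch.no_grad():
    out, _ = attn(points, points, points)
    out_shuffled, _ = attn(shuffled, shuffled, shuffled)

# Per-point outputs are permuted the same way (equivariance) ...
print(torch.allclose(out[:, perm, :], out_shuffled, atol=1e-5))  # True
# ... so a max-pool over points is order-independent (invariance).
print(torch.allclose(out.max(dim=1).values,
                     out_shuffled.max(dim=1).values, atol=1e-5))  # True
```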
We present a data-driven approach to reconstructing high-resolution and detailed volumetric representations of 3D shapes.