Humans can naturally and effectively find salient regions in complex scenes.
As an essential ingredient of modern deep learning, the attention mechanism, especially self-attention, plays a vital role in discovering global correlations.
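To make the "global correlation" property concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy (the function and projection names are illustrative, not from any of the cited papers): every output row is a weighted sum over all input positions, so any two positions interact directly regardless of distance.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention sketch.

    x: (n, d) sequence of n feature vectors; wq/wk/wv: (d, d) projections.
    Each output row mixes ALL input rows, weighted by pairwise similarity,
    which is what gives self-attention its global receptive field."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])           # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                      # toy sequence: 5 tokens, dim 8
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)                                     # (5, 8)
```

Real implementations add multiple heads, an output projection, and masking, but the (n, n) similarity matrix above is the core that lets every token attend to every other token.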
Meshes with arbitrary connectivity can be remeshed via self-parameterization to have Loop subdivision sequence connectivity, making SubdivNet a general approach.
In the first week of May 2021, researchers from four institutions (Google, Tsinghua University, Oxford University, and Facebook) shared their latest work [16, 7, 12, 17] on arXiv.org almost simultaneously, each proposing a new learning architecture consisting mainly of linear layers and claiming it to be comparable, or even superior, to convolution-based models.
Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks.
Self-attention is inherently permutation-invariant when processing a sequence of points, making it well-suited to point cloud learning.