Search Results

Contextual Transformer Networks for Visual Recognition

xmu-xiaoma666/External-Attention-pytorch 26 Jul 2021

This design fully capitalizes on the contextual information among input keys to guide the learning of the dynamic attention matrix, thereby strengthening the capacity of visual representation.

Image Classification, Instance Segmentation +3
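
The key idea is to mine a static context from the keys with a 3x3 convolution, then combine it with the input to produce a dynamic attention map. Below is a minimal PyTorch sketch of that static-plus-dynamic design; it replaces the paper's local matrix multiplication with a simpler element-wise gate, so the structure and all names here are illustrative, not the repo's implementation.

```python
import torch
import torch.nn as nn

class CoTSketch(nn.Module):
    """Simplified CoT-style block: static context + dynamically gated values."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        # Static context: keys contextualized by a k x k grouped convolution.
        self.key_embed = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2,
                      groups=4, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        self.value_embed = nn.Sequential(
            nn.Conv2d(dim, dim, 1, bias=False),
            nn.BatchNorm2d(dim),
        )
        # Attention produced from [static context, input] via two 1x1 convs.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * dim, dim // 2, 1, bias=False),
            nn.BatchNorm2d(dim // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // 2, dim, 1),
        )

    def forward(self, x):
        k_static = self.key_embed(x)          # contextualized keys
        v = self.value_embed(x)
        a = self.attn(torch.cat([k_static, x], dim=1))
        k_dynamic = a.sigmoid() * v           # dynamic context (element-wise gate)
        return k_static + k_dynamic

x = torch.randn(2, 64, 32, 32)
print(CoTSketch(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```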

ResT: An Efficient Transformer for Visual Recognition

xmu-xiaoma666/External-Attention-pytorch NeurIPS 2021

This paper presents ResT, an efficient multi-scale vision Transformer that capably serves as a general-purpose backbone for image recognition.

Image Classification

Dual Attention Network for Scene Segmentation

xmu-xiaoma666/External-Attention-pytorch CVPR 2019

Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively.

Position, Segmentation +1
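
The two modules are self-attention applied along different axes: the position module attends across spatial locations, the channel module across channels. A minimal sketch of both, assuming the standard query/key/value formulation with a zero-initialized residual scale; the paper's exact normalization details may differ.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Self-attention across spatial positions (PAM-style)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim // 8, 1)
        self.k = nn.Conv2d(dim, dim // 8, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)         # (b, hw, c//8)
        k = self.k(x).flatten(2)                         # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)              # (b, hw, hw)
        v = self.v(x).flatten(2)                         # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Self-attention across channels (CAM-style)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                      # (b, c, hw)
        attn = torch.softmax(f @ f.transpose(1, 2), dim=-1)   # (b, c, c)
        out = (attn @ f).view(b, c, h, w)
        return self.gamma * out + x
```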

Dynamic Convolution: Attention over Convolution Kernels

xmu-xiaoma666/External-Attention-pytorch CVPR 2020

Light-weight convolutional neural networks (CNNs) suffer performance degradation as their low computational budgets constrain both the depth (number of convolution layers) and the width (number of channels) of CNNs, resulting in limited representation capability.

Image Classification, Keypoint Detection
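
The title summarizes the mechanism: K parallel kernels are aggregated per input by softmax attention computed from globally pooled features, adding capacity without extra depth or width. A sketch under that reading; the kernel count, attention MLP, and temperature value are illustrative choices, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Sketch: K parallel kernels aggregated by input-dependent attention."""
    def __init__(self, in_ch, out_ch, kernel_size, K=4, temperature=30.0):
        super().__init__()
        self.K, self.temperature = K, temperature
        self.kernel_size = kernel_size
        self.weight = nn.Parameter(
            torch.randn(K, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, max(in_ch // 4, 4)), nn.ReLU(inplace=True),
            nn.Linear(max(in_ch // 4, 4), K))

    def forward(self, x):
        b, c, h, w = x.shape
        # Softmax attention over the K kernels (temperature smooths early training).
        pi = torch.softmax(self.attn(x) / self.temperature, dim=1)  # (b, K)
        # Aggregate kernels per sample, then run them as one grouped convolution.
        w_agg = torch.einsum('bk,koihw->boihw', pi, self.weight)
        w_agg = w_agg.reshape(-1, *self.weight.shape[2:])           # (b*out, in, k, k)
        out = F.conv2d(x.reshape(1, b * c, h, w), w_agg,
                       padding=self.kernel_size // 2, groups=b)
        return out.view(b, -1, h, w)

layer = DynamicConv2d(32, 64, 3)
print(layer(torch.randn(2, 32, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```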

Squeeze-and-Excitation Networks

xmu-xiaoma666/External-Attention-pytorch CVPR 2018

Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission, which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%.

Image Classification
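
The SE block itself is simple: squeeze spatial information with global average pooling, excite with a bottleneck MLP, and rescale each channel by the resulting gate. A minimal sketch; the reduction ratio of 16 is the paper's common default, the rest is kept deliberately bare.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling -> bottleneck MLP -> channel gates."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: global average pool -> (b, c)
        g = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel gates in [0, 1]
        return x * g                     # recalibrate feature maps

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```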

Attention Is All You Need

xmu-xiaoma666/External-Attention-pytorch NeurIPS 2017

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.

Ranked #2 on Multimodal Machine Translation on Multi30K (BLEU (DE-EN) metric)

Abstractive Text Summarization, Coreference Resolution +8
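
At the Transformer's core is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, which replaces recurrence and convolution entirely. A minimal sketch with an optional mask; the tensor layout (batch, heads, sequence, head dim) is a common convention rather than anything mandated by the paper.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """softmax(QK^T / sqrt(d_k)) V, the core operation of the Transformer."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # similarity of queries to keys
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))  # block masked positions
    return torch.softmax(scores, dim=-1) @ v           # weighted sum of values

q = k = v = torch.randn(2, 8, 16, 64)  # (batch, heads, seq, d_k)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 8, 16, 64])
```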

CBAM: Convolutional Block Attention Module

xmu-xiaoma666/External-Attention-pytorch ECCV 2018

We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks.

General Classification, Image Classification
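
CBAM applies two gates in sequence: a channel gate built from average- and max-pooled descriptors passed through a shared MLP, then a spatial gate built from a 7x7 convolution over channel-wise average and max maps. A compact sketch of that pipeline; the reduction ratio and kernel size follow the paper's defaults, the rest is illustrative.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, both applied as gates."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP for avg- and max-pooled descriptors
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over avg- and max-pooled features.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

x = torch.randn(2, 64, 32, 32)
print(CBAM(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```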

SA-Net: Shuffle Attention for Deep Convolutional Neural Networks

xmu-xiaoma666/External-Attention-pytorch 30 Jan 2021

In this paper, we propose an efficient Shuffle Attention (SA) module to address this issue, which adopts Shuffle Units to combine two types of attention mechanisms effectively.

Instance Segmentation, Object Detection +2
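
The SA module groups channels, runs a lightweight channel branch and a spatial branch on the two halves of each group, and then shuffles channels so information mixes across groups. The sketch below is a heavy simplification under that reading; the gating parameters and group count are illustrative, and the paper's exact branch design differs in its details.

```python
import torch
import torch.nn as nn

class ShuffleAttentionSketch(nn.Module):
    """Simplified sketch: per-group channel + spatial branches, then channel shuffle."""
    def __init__(self, channels, groups=8):
        super().__init__()
        self.groups = groups
        half = channels // (2 * groups)
        # Learnable scale/shift for the channel branch (applied after pooling).
        self.c_weight = nn.Parameter(torch.zeros(1, half, 1, 1))
        self.c_bias = nn.Parameter(torch.ones(1, half, 1, 1))
        # Spatial branch: group-norm statistics plus scale/shift.
        self.s_weight = nn.Parameter(torch.zeros(1, half, 1, 1))
        self.s_bias = nn.Parameter(torch.ones(1, half, 1, 1))
        self.gn = nn.GroupNorm(half, half)

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b * self.groups, c // self.groups, h, w)
        xc, xs = x.chunk(2, dim=1)  # split each group into two branches
        # Channel attention branch: gate from globally pooled statistics.
        gc = torch.sigmoid(self.c_weight * xc.mean(dim=(2, 3), keepdim=True)
                           + self.c_bias)
        # Spatial attention branch: gate from the normalized feature map.
        gs = torch.sigmoid(self.s_weight * self.gn(xs) + self.s_bias)
        out = torch.cat([xc * gc, xs * gs], dim=1).view(b, c, h, w)
        # Channel shuffle so information flows across groups.
        return out.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(2, 64, 32, 32)
print(ShuffleAttentionSketch(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```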