Attention Modules

Triplet Attention

Introduced by Misra et al. in Rotate to Attend: Convolutional Triplet Attention Module

Triplet attention comprises three branches, each responsible for capturing cross-dimension interaction between the spatial dimensions and the channel dimension of the input. Given an input tensor of shape (C × H × W), each branch aggregates cross-dimensional interactive features between either spatial dimension (H or W) and the channel dimension C.
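The branch structure described above can be sketched in plain NumPy. This is a simplified, hypothetical illustration, not the paper's implementation: each branch Z-pools (max + average) across one dimension, builds a 2-D attention map over the remaining two dimensions with a single shared 7×7 kernel (the real module learns a two-input convolution per branch and uses batch normalization), gates the input with a sigmoid of that map, and the three branch outputs are averaged.

```python
import numpy as np

def z_pool(x, axis):
    # Z-pool: stack max- and mean-pooled features across the chosen axis.
    return np.stack([x.max(axis=axis), x.mean(axis=axis)], axis=0)  # (2, d1, d2)

def conv2d_same(x, k):
    # Naive 'same'-padded 2-D cross-correlation, sufficient for a sketch.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def branch(x, pool_axis, kernel):
    # Pool across `pool_axis`, form a 2-D attention map over the other
    # two dimensions, then gate the input with it.
    pooled = z_pool(x, pool_axis)  # (2, d1, d2)
    # Simplification: one shared kernel applied to both pooled maps and
    # summed (the paper uses a learned 2-in/1-out conv per branch).
    att = sigmoid(conv2d_same(pooled[0], kernel) +
                  conv2d_same(pooled[1], kernel))
    return x * np.expand_dims(att, pool_axis)

def triplet_attention(x, kernel):
    # x: (C, H, W). Three branches: (H, W) spatial attention plus the
    # two cross-dimension (C, W) and (C, H) attention maps.
    y_hw = branch(x, 0, kernel)  # pool over C -> (H, W) map
    y_cw = branch(x, 1, kernel)  # pool over H -> (C, W) map
    y_ch = branch(x, 2, kernel)  # pool over W -> (C, H) map
    return (y_hw + y_cw + y_ch) / 3.0
```

Because each branch multiplies the input by a sigmoid gate in (0, 1), the averaged output is elementwise no larger than a non-negative input, e.g. `triplet_attention(np.random.rand(4, 8, 8), np.ones((7, 7)) / 49.0)` preserves the (4, 8, 8) shape.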

Source: Rotate to Attend: Convolutional Triplet Attention Module
