Quick Attention

\begin{equation} QA\left( x \right) = \sigma\left( f^{1 \times 1}\left( x \right) \right) + x \end{equation}

Quick Attention takes a feature map of size W×H×C (Width × Height × Channels) as input and creates two instances of it. A 1×1×C convolution is applied to the first instance, followed by a sigmoid activation; the result is added to the second instance to produce the final attention map, which has the same dimensions as the input.

Source: HistoSeg : Quick attention with multi-loss function for multi-structure segmentation in digital histology images
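
Below is a minimal PyTorch sketch of the operation described above. The module name QuickAttention, the channel-preserving 1×1 convolution, and the example input sizes are illustrative assumptions based on the description, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class QuickAttention(nn.Module):
    """Sketch of Quick Attention: QA(x) = sigmoid(conv1x1(x)) + x."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1xC convolution applied to the first instance of the input;
        # channel count is assumed to be preserved so the residual add works.
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid activations of the 1x1 convolution output, added to the
        # second (unchanged) instance of the input feature map.
        return torch.sigmoid(self.conv1x1(x)) + x


# Usage: the output has the same dimensions as the input.
feat = torch.randn(1, 64, 32, 32)   # N x C x H x W
qa = QuickAttention(channels=64)
print(qa(feat).shape)               # torch.Size([1, 64, 32, 32])
```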

Tasks

Task                         Papers   Share
Boundary Detection           1        25.00%
Image Segmentation           1        25.00%
Medical Image Segmentation   1        25.00%
Semantic Segmentation        1        25.00%

