Attention Modules

Cross-Scale Non-Local Attention, or CS-NL, is a non-local attention module for image super-resolution deep networks. It learns to mine long-range dependencies between low-resolution (LR) features and larger-scale high-resolution (HR) patches within the same feature map. Specifically, suppose we are conducting $s$-scale super-resolution with the module. Given a feature map $X$ of spatial size $(W, H)$, we first bilinearly downsample it by scale $s$ to obtain $Y$, then match each $p \times p$ patch in $X$ against the $p \times p$ candidate patches in $Y$ to obtain softmax matching scores. Finally, we perform a deconvolution on the scores: the patches of size $(sp, sp)$ extracted from $X$ are weighted by the scores and added together. The resulting $Z$ of size $(sW, sH)$ is an $s\times$ super-resolved version of $X$.

Source: Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining
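The matching-then-deconvolution procedure above can be expressed with standard patch extraction (unfold), correlation, and transposed convolution. Below is a minimal PyTorch sketch of the idea, not the authors' implementation: the function name cross_scale_attention, the patch size, the softmax temperature, and the batch size of 1 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_scale_attention(x, scale=2, patch=3, temperature=10.0):
    """Sketch of cross-scale non-local attention for one feature map.

    x: feature map of shape (1, C, H, W); H and W divisible by `scale`.
    Returns z of shape (1, C, scale*H, scale*W).
    """
    _, c, h, w = x.shape
    sp = patch * scale          # size of the corresponding HR patches in X
    pad = patch // 2

    # 1) Bilinearly downsample X by the SR scale to get Y.
    y = F.interpolate(x, scale_factor=1.0 / scale, mode='bilinear',
                      align_corners=False)

    # 2) Extract p x p candidates from Y (matching filters) and the aligned
    #    sp x sp patches from X (deconvolution filters); both give N patches.
    y_unf = F.unfold(y, kernel_size=patch, padding=pad)            # (1, C*p*p, N)
    n = y_unf.shape[-1]
    w_match = y_unf.transpose(1, 2).reshape(n, c, patch, patch)

    x_unf = F.unfold(x, kernel_size=sp, padding=scale * pad, stride=scale)
    w_deconv = x_unf.transpose(1, 2).reshape(n, c, sp, sp)

    # 3) Match every p x p patch of X against the N candidates from Y
    #    (correlation = convolution with L2-normalised candidate filters),
    #    then softmax over the N candidates at each position.
    norm = w_match.flatten(1).norm(dim=1).clamp_min(1e-6).view(n, 1, 1, 1)
    score = F.conv2d(x, w_match / norm, padding=pad)               # (1, N, H, W)
    attn = F.softmax(score * temperature, dim=1)

    # 4) "Deconvolution": a transposed convolution with the sp x sp patches of
    #    X as filters performs the score-weighted addition of HR patches.
    z = F.conv_transpose2d(attn, w_deconv, stride=scale, padding=scale * pad)

    # Average overlapping contributions (each output pixel is covered by
    # several sp x sp patches).
    overlap = F.conv_transpose2d(
        torch.ones(1, 1, h, w, dtype=x.dtype, device=x.device),
        torch.ones(1, 1, sp, sp, dtype=x.dtype, device=x.device),
        stride=scale, padding=scale * pad)
    return z / overlap.clamp_min(1e-6)

# Example: 2x super-resolution of a 64-channel feature map.
x = torch.randn(1, 64, 32, 32)
z = cross_scale_attention(x, scale=2, patch=3)
print(z.shape)  # torch.Size([1, 64, 64, 64])
```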

Tasks

Task                      Papers   Share
Image Super-Resolution    1        50.00%
Super-Resolution          1        50.00%
