CVPR 2021 • Dongsheng Ruan, Daiyin Wang, Yuan Zheng, Nenggan Zheng, Min Zheng
These approaches typically learn the mapping from global context to attention activations using fully-connected layers or linear transformations.
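As a concrete illustration of this pattern, a Squeeze-and-Excitation-style channel attention block pools each channel to a single global-context value and passes it through two fully-connected layers to produce attention activations. The sketch below is a minimal, dependency-free version of that idea; the function name, weight shapes, and reduction structure are illustrative assumptions, not the method proposed in this paper.

```python
import math

def sigmoid(x):
    # Logistic gate squashing an activation into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def matvec(W, v):
    # Fully-connected layer as a plain matrix-vector product (no bias).
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def se_attention(feature_map, W1, W2):
    """SE-style channel attention (illustrative sketch).

    feature_map: C x H x W nested lists of floats.
    W1: (C // r) x C reduction weights; W2: C x (C // r) expansion weights.
    Returns one attention activation per channel, each in (0, 1).
    """
    # Squeeze: global average pooling yields one context value per channel.
    context = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
               for ch in feature_map]
    # Excitation: two fully-connected layers map context -> activations.
    hidden = [relu(h) for h in matvec(W1, context)]
    return [sigmoid(a) for a in matvec(W2, hidden)]
```

With hand-picked weights, a channel whose expansion row is positive is amplified (activation above 0.5) while a channel with a negative row is suppressed; in practice both weight matrices are learned end-to-end.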