Peer-attention

Peer-attention is a network component that dynamically learns attention weights using another block or an input modality (the "peer"). This is unlike AssembleNet, which partly relies on evolutionary mutations to explore connections. Once the attention weights are learned, the connections can either be pruned by keeping only the argmax over $h$, or kept soft with their softmax weights.
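
The mechanism can be made concrete with a short sketch. The module below is an illustrative PyTorch implementation, not the paper's code: the class name, the global-average-pool-plus-linear weighting, and all tensor shapes are assumptions. It predicts softmax weights over $h$ candidate input connections from a pooled peer feature, then either mixes the inputs softly or prunes to the single argmax connection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeerAttention(nn.Module):
    """Weighs h candidate input connections using a 'peer' feature.

    Illustrative sketch: the peer tensor (another block's output or an
    input modality) is globally pooled and mapped to one logit per
    candidate connection.
    """

    def __init__(self, peer_channels: int, num_connections: int):
        super().__init__()
        # one logit per candidate connection, predicted from the peer
        self.fc = nn.Linear(peer_channels, num_connections)

    def forward(self, inputs, peer, prune=False):
        # inputs: list of h tensors, each (B, C, T, H, W)
        # peer:   (B, C_peer, T, H, W)
        pooled = peer.mean(dim=(2, 3, 4))             # global average pool -> (B, C_peer)
        weights = F.softmax(self.fc(pooled), dim=-1)  # soft connection weights -> (B, h)
        if prune:
            # hard pruning: keep only the argmax connection
            weights = F.one_hot(weights.argmax(dim=-1),
                                weights.size(-1)).to(weights.dtype)
        stacked = torch.stack(inputs, dim=1)          # (B, h, C, T, H, W)
        w = weights.view(*weights.shape, 1, 1, 1, 1)  # broadcast over C, T, H, W
        return (stacked * w).sum(dim=1)               # weighted sum -> (B, C, T, H, W)

# Hypothetical usage with three candidate connections:
att = PeerAttention(peer_channels=64, num_connections=3)
xs = [torch.randn(2, 32, 8, 14, 14) for _ in range(3)]
peer = torch.randn(2, 64, 8, 14, 14)
out = att(xs, peer)  # (2, 32, 8, 14, 14)
```

Keeping the softmax weights lets gradients reach every candidate connection during training; switching to argmax pruning afterwards yields a sparser network, as described above.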

Source: AssembleNet++: Assembling Modality Representations via Attention Connections

Tasks


Task                    Papers  Share
Action Classification   1       50.00%
Activity Recognition    1       50.00%

Categories

Attention Modules