Search Results for author: Anxhelo Diko

Found 1 paper, 1 paper with code

ReViT: Enhancing Vision Transformers with Attention Residual Connections for Visual Recognition

1 code implementation • 17 Feb 2024 • Anxhelo Diko, Danilo Avola, Marco Cascio, Luigi Cinque

The Vision Transformer (ViT) self-attention mechanism suffers from feature collapse in deeper layers, causing low-level visual features to vanish.

Image Classification • Instance Segmentation • +3
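To illustrate the attention residual idea named in the title, here is a minimal PyTorch sketch in which the attention scores from the previous layer are blended into the current layer's scores before the softmax. The module name, the mixing point, and the blending weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class ResidualAttention(nn.Module):
    """Sketch of a ViT attention block with a residual path on attention scores (assumed design)."""

    def __init__(self, dim, num_heads=8, alpha=0.5):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.alpha = alpha  # assumed blending weight between current and previous attention
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, prev_attn=None):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # raw attention scores
        if prev_attn is not None:
            # Residual connection on the previous layer's attention scores,
            # intended to keep low-level attention patterns alive in deeper layers.
            attn = self.alpha * attn + (1.0 - self.alpha) * prev_attn
        attn_probs = attn.softmax(dim=-1)
        out = (attn_probs @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out), attn  # pass the pre-softmax scores on to the next layer
```

A stack of such blocks would simply thread the returned scores into the next block's `prev_attn` argument; the actual placement of the residual connection in ReViT may differ, so consult the released code for the definitive implementation.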
