Deep Attention
37 papers with code • 0 benchmarks • 2 datasets
Latest papers
Restoring Snow-Degraded Single Images With Wavelet in Vision Transformer
In our experiments, we evaluated the performance of our model on the popular SRRS, SNOW100K, and CSD datasets.
Deep Attention Q-Network for Personalized Treatment Recommendation
Tailoring treatment to individual patients is crucial, yet challenging, for achieving optimal healthcare outcomes.
Towards Deep Attention in Graph Neural Networks: Problems and Remedies
AERO-GNN provably mitigates the identified problems of deep graph attention, as further demonstrated empirically by (a) its adaptive and less smooth attention functions and (b) higher performance at deep layers (up to 64 layers).
AIA: Attention in Attention Within Collaborate Domains
Attention mechanisms can effectively improve the performance of mobile networks at a limited computational cost.
Open-CyKG: An Open Cyber Threat Intelligence Knowledge Graph
Instant analysis of cybersecurity reports is a fundamental challenge for security experts, as a vast amount of cyber information is generated daily, necessitating automated information-extraction tools to facilitate querying and retrieval of data.
Deep Attention-guided Graph Clustering with Dual Self-supervision
Existing deep embedding clustering works consider only the deepest layer to learn a feature embedding and thus fail to fully exploit the discriminative information available from cluster assignments, which limits performance.
Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithm development.
Thermal Image Super-Resolution Using Second-Order Channel Attention with Varying Receptive Fields
Specifically, we explore how to effectively attend to contrasting receptive fields (RFs), since increasing the RFs of a network can be computationally expensive.
Grid Partitioned Attention: Efficient Transformer Approximation with Inductive Bias for High Resolution Detail Generation
Attention is a general reasoning mechanism that can flexibly deal with image information, but its memory requirements have so far made it impractical for high-resolution image generation.
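The memory bottleneck mentioned here comes from full self-attention building a pairwise matrix over all spatial positions. A minimal sketch (not from the paper; the function name is illustrative) of how that matrix grows with image resolution:

```python
# Full self-attention over an H x W feature map treats each pixel as a
# token, so the attention matrix has (H*W) x (H*W) entries. Memory
# therefore grows quartically with image side length.
def attention_matrix_entries(height, width):
    tokens = height * width
    return tokens * tokens

# Doubling the resolution multiplies the attention matrix size by 16,
# which is why full attention becomes impractical at high resolutions.
print(attention_matrix_entries(64, 64))    # 16_777_216 entries
print(attention_matrix_entries(128, 128))  # 268_435_456 entries
```

Approximations such as the grid partitioning proposed in this paper restrict attention to smaller groups of tokens to avoid this quadratic-in-tokens cost.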
HIT: A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation
In this paper, we propose HIT, a robust representation learning method for code-mixed texts.