Deep Attention
40 papers with code • 0 benchmarks • 2 datasets
Most implemented papers
PREDATOR: Registration of 3D Point Clouds with Low Overlap
We introduce PREDATOR, a model for pairwise point-cloud registration with deep attention to the overlap region.
Deep Attention Recurrent Q-Network
A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels.
Learning to Segment from Scribbles using Multi-scale Adversarial Attention Gates
We evaluated our model on several medical (ACDC, LVSC, CHAOS) and non-medical (PPSS) datasets, and we report performance levels matching those achieved by models trained with fully annotated segmentation masks.
Processing Megapixel Images with Deep Attention-Sampling Models
We show that sampling from the attention distribution yields an unbiased, minimal-variance estimator of the full model, and we derive an unbiased estimator of the gradient that lets us train the model end-to-end with standard SGD.
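The key idea can be sketched numerically: if the full model computes an attention-weighted sum of per-patch features, then drawing patch indices from the attention distribution and averaging their features is an unbiased Monte Carlo estimate of that sum, so only a few patches of a megapixel image need to be processed. The following is a minimal illustrative sketch (all names and sizes are assumptions, not the paper's code):

```python
import numpy as np

# Illustrative sketch: the "full model" output is sum_i a_i * f_i, where a is
# an attention distribution over N patches and f holds per-patch features.
# Sampling i ~ a and averaging f[i] estimates that sum without bias, since
# E_{i~a}[f_i] = sum_i a_i * f_i.
rng = np.random.default_rng(0)

N, d, k = 1000, 8, 200                       # patches, feature dim, samples
scores = rng.normal(size=N)
a = np.exp(scores) / np.exp(scores).sum()    # attention distribution (sums to 1)
f = rng.normal(size=(N, d))                  # per-patch features

full = a @ f                                 # exact attention-weighted output
idx = rng.choice(N, size=k, p=a)             # sample k patch indices from a
estimate = f[idx].mean(axis=0)               # unbiased Monte Carlo estimate
```

With k = 200 of 1000 patches, the estimate already tracks the exact weighted sum closely; the paper's contribution is showing this estimator (and a matching gradient estimator) has low enough variance to train with.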
Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models
We introduce a new local sparse attention layer that preserves two-dimensional geometry and locality.
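A 2D-local mask of this kind can be sketched directly: flatten an H x W grid of pixels to H*W tokens, but restrict each token to attend only to tokens within a small 2D window around it, so locality is measured in image space rather than along the flattened 1D sequence. This is a hedged illustration of the idea, not the paper's exact layer (the function name and window choice are assumptions):

```python
import numpy as np

def local_2d_mask(H, W, radius):
    """Boolean (H*W, H*W) mask: token i may attend to token j only if their
    2D grid positions are within `radius` in Chebyshev (window) distance."""
    ys, xs = np.mgrid[:H, :W]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)         # (H*W, 2)
    # pairwise window distance between all grid positions
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).max(axis=-1)
    return dist <= radius

# each pixel of a 4x4 image sees its 3x3 neighbourhood
mask = local_2d_mask(4, 4, 1)
```

A 1D band mask over the flattened sequence would let a pixel attend to horizontally adjacent pixels but not to the pixel directly above it; the 2D mask keeps both, which is the geometric locality the layer is designed to preserve.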
Deep Attention Aware Feature Learning for Person Re-Identification
Visual attention has proven to be effective in improving the performance of person re-identification.
Deep multi-stations weather forecasting: explainable recurrent convolutional neural networks
Deep learning applied to weather forecasting has started gaining popularity because of the progress achieved by data-driven models.
Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing
We present BRIDGE, a powerful sequential architecture for modeling dependencies between natural language questions and relational databases in cross-DB semantic parsing.
Restoring Snow-Degraded Single Images With Wavelet in Vision Transformer
In our experiments, we evaluated the performance of our model on the popular SRRS, SNOW100K, and CSD datasets.
Deep attention-based classification network for robust depth prediction
However, robust depth prediction suffers from two challenging problems: a) how to extract more discriminative features for different scenes (compared to a single scene)?