Hard Attention
38 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Recurrent Models of Visual Attention
Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels.
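The model proposed here (often called RAM) avoids that cost by processing only a short sequence of small glimpses chosen by a learned policy, so per-step computation is independent of image resolution. A minimal sketch of the glimpse-extraction step; the crop size and (row, col) convention are illustrative assumptions, not the paper's exact retina transform:

```python
import numpy as np

def extract_glimpse(image, center, size=8):
    """Crop a size x size patch around `center` (row, col), zero-padding
    at image borders. Hard attention: only this patch is processed,
    so per-step cost does not grow with the full image size."""
    pad = size // 2
    padded = np.pad(image, pad, mode="constant")
    r, c = center[0] + pad, center[1] + pad
    return padded[r - pad:r + pad, c - pad:c + pad]

# toy usage: a 100x100 image, glimpse centered at (40, 60)
img = np.random.rand(100, 100)
patch = extract_glimpse(img, (40, 60))
assert patch.shape == (8, 8)
```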
Deep Attention Recurrent Q-Network
A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels.
Dual Attention Networks for Few-Shot Fine-Grained Recognition
Specifically, attention guidance is produced from the deep activations of input images, and hard attention is realized by keeping only a few useful deep descriptors and treating them as a bag for multi-instance learning.
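A minimal sketch of that selection step, assuming the "useful" descriptors are the spatial columns of the feature map with the largest L2 norm (the paper's actual criterion is learned guidance, so this scorer is a stand-in):

```python
import numpy as np

def select_descriptors(feature_map, k=4):
    """feature_map: (C, H, W) deep activations.
    Keep the k spatial descriptors (C-dim columns) with the largest
    L2 norm and return them as a 'bag' of shape (k, C)."""
    c, h, w = feature_map.shape
    descriptors = feature_map.reshape(c, h * w).T   # (H*W, C)
    norms = np.linalg.norm(descriptors, axis=1)
    keep = np.argsort(norms)[-k:]                   # indices of the top-k
    return descriptors[keep]

bag = select_descriptors(np.random.rand(64, 7, 7), k=4)
assert bag.shape == (4, 64)
```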
Overcoming catastrophic forgetting with hard attention to the task
In this paper, we propose a task-based hard attention mechanism that preserves previous tasks' information without affecting the current task's learning.
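HAT learns one embedding per task and turns it into a near-binary mask that gates each layer's units, so units important for earlier tasks can be protected from updates. A minimal sketch of the gating step; the layer width, task count, and sigmoid scale s are illustrative assumptions:

```python
import torch

s = 400.0                                      # large scale -> near-binary mask
task_embedding = torch.nn.Embedding(10, 256)   # one learnable vector per task

def masked_forward(h, task_id):
    """Gate a layer's activations h (batch, 256) with a task-specific
    hard-attention mask. Units masked to ~0 stay free for other tasks."""
    mask = torch.sigmoid(s * task_embedding(torch.tensor(task_id)))
    return h * mask

out = masked_forward(torch.randn(2, 256), task_id=3)
```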
Hard Non-Monotonic Attention for Character-Level Transduction
We compare soft and hard non-monotonic attention experimentally and find that the exact algorithm significantly improves performance over the stochastic approximation and outperforms soft attention.
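The exact algorithm treats the attended position as a latent alignment and sums it out rather than sampling it. A minimal sketch of one decoding step, assuming alignments at different steps are independent given the decoder state; function and tensor names are illustrative:

```python
import torch

def exact_step_likelihood(attn_logits, token_logits, target):
    """attn_logits: (src_len,) unnormalized scores over source positions.
    token_logits: (src_len, vocab) output distribution if we attend there.
    Returns p(target) = sum_j p(align=j) * p(target | align=j),
    i.e. the latent alignment marginalized exactly instead of sampled."""
    align_probs = torch.softmax(attn_logits, dim=0)              # (src_len,)
    token_probs = torch.softmax(token_logits, dim=1)[:, target]  # (src_len,)
    return (align_probs * token_probs).sum()

p = exact_step_likelihood(torch.randn(5), torch.randn(5, 30), target=7)
```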
Exact Hard Monotonic Attention for Character-Level Transduction
Our models achieve state-of-the-art performance on morphological inflection.
Saccader: Improving Accuracy of Hard Attention Models for Vision
Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, their decisions are difficult to interpret.
Progressive Attention Networks for Visual Attribute Prediction
We propose a novel attention model that can accurately attend to target objects of various scales and shapes in images.
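The "progressive" part refers to estimating and applying attention over several feature layers in turn rather than once; a toy sketch under that reading, with the attention scorer deliberately stubbed out (the real model learns it):

```python
import torch

def progressive_attention(feature_maps):
    """feature_maps: list of (C, H, W) activations from successive layers
    (same spatial size here for simplicity -- an assumption).
    Each stage computes a spatial attention map from the current features
    and gates the next layer's features, progressively refining the focus."""
    x = feature_maps[0]
    for feat in feature_maps[1:]:
        scores = x.mean(dim=0)                  # (H, W) stub attention scorer
        attn = torch.softmax(scores.flatten(), 0).reshape(scores.shape)
        x = feat * attn                         # progressively gated features
    return x

out = progressive_attention([torch.randn(16, 7, 7) for _ in range(3)])
```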
Morphological Inflection Generation with Hard Monotonic Attention
We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection.
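Here the decoder attends to exactly one input character at a time and, at each step, either emits an output symbol or a special step action that advances the attention pointer. A minimal greedy-decoding sketch; the predict call stands in for the trained model and is an assumption of this sketch:

```python
STEP = "<step>"  # control symbol that advances the hard-attention pointer

def decode(src, predict, max_len=50):
    """Greedy decoding with hard monotonic attention.
    `predict(attended_char, prefix)` is an assumed model call returning
    the most likely next symbol (an output char, STEP, or '<eos>')."""
    i, out = 0, []
    while len(out) < max_len:
        sym = predict(src[i], out)
        if sym == "<eos>":
            break
        if sym == STEP:                  # move attention to the next input char
            i = min(i + 1, len(src) - 1)
        else:
            out.append(sym)              # write while staying on src[i]
    return "".join(out)
```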
Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling
In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention" (ReSA), so that the two mechanisms benefit each other.
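In ReSA, the hard module (trained with reinforcement learning in the paper) selects a sparse subset of tokens, and soft self-attention is then computed only among the selected tokens. A minimal sketch of that fusion, with the RL-trained selector replaced by a random-sampling stub:

```python
import torch

def resa_step(tokens, select_prob=0.5):
    """tokens: (n, d). Hard attention samples a sparse token subset
    (a stub for the paper's RL-trained selector); soft dot-product
    self-attention is then computed only among the selected tokens."""
    n, d = tokens.shape
    keep = torch.rand(n) < select_prob           # stochastic hard selection
    keep[0] = True                               # ensure a non-empty subset
    sub = tokens[keep]                           # (k, d)
    scores = sub @ sub.T / d ** 0.5              # (k, k) soft attention
    ctx = torch.softmax(scores, dim=-1) @ sub    # fused context for the subset
    out = tokens.clone()
    out[keep] = ctx                              # write contexts back in place
    return out

out = resa_step(torch.randn(10, 32))
```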