A Memory Network provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory component, and the memory they do have, encoded by states and weights, is too small and not compartmentalized enough to accurately remember facts from the past (RNNs, for example, have difficulty memorizing and performing tasks such as copying).
A memory network consists of a memory $\textbf{m}$ (an array of objects indexed by $\textbf{m}_{i}$) and four potentially learned components:

- $I$ (input feature map): converts the incoming input to the internal feature representation.
- $G$ (generalization): updates old memories given the new input, giving the network an opportunity to compress and generalize its memories.
- $O$ (output feature map): produces a new output, in the feature representation space, given the new input and the current memory state.
- $R$ (response): converts the output into the desired response format, e.g. a textual response or an action.
Given an input $x$ (e.g., an input character, word or sentence depending on the granularity chosen, an image or an audio signal), the flow of the model is as follows:

1. Convert $x$ to an internal feature representation $I(x)$.
2. Update the memories $\textbf{m}_{i}$ given the new input: $\textbf{m}_{i} = G(\textbf{m}_{i}, I(x), \textbf{m})$, $\forall i$.
3. Compute output features $o$ given the new input and the memory: $o = O(I(x), \textbf{m})$.
4. Decode the output features $o$ to give the final response: $r = R(o)$.
This process is applied at both train and test time, if there is a distinction between such phases: memories are also stored at test time, but the model parameters of $I$, $G$, $O$ and $R$ are not updated. Memory networks cover a wide class of possible implementations, and the components $I$, $G$, $O$ and $R$ can potentially use any existing ideas from the machine learning literature.
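As a concrete illustration, here is a minimal sketch of this skeleton in Python. It assumes a toy bag-of-words feature map for $I$, a write-to-the-next-free-slot $G$, dot-product scoring over an embedding for $O$, and a trivial decoding $R$; the `MemoryNetwork` class and its embedding matrix `U` are illustrative stand-ins, not the models used in the paper.

```python
import numpy as np

class MemoryNetwork:
    """Toy sketch of the I/G/O/R memory network skeleton (illustrative only)."""

    def __init__(self, vocab, dim=32, seed=0):
        self.vocab = {w: i for i, w in enumerate(vocab)}
        self.memory = []  # m: the array of stored memory objects m_i
        rng = np.random.default_rng(seed)
        # Embedding used for scoring; random here, learned during training in practice.
        self.U = rng.normal(scale=0.1, size=(len(vocab), dim))

    def I(self, x):
        """Input feature map: raw text -> internal bag-of-words feature vector."""
        v = np.zeros(len(self.vocab))
        for w in x.lower().split():
            if w in self.vocab:
                v[self.vocab[w]] += 1.0
        return v

    def G(self, feat):
        """Generalization: here, simply write the new features to the next slot."""
        self.memory.append(feat)

    def O(self, feat):
        """Output feature map: score every memory against the input, return the best."""
        q = feat @ self.U
        scores = [q @ (m @ self.U) for m in self.memory]
        return self.memory[int(np.argmax(scores))]

    def R(self, o):
        """Response: decode output features; here, just list the active words."""
        return [w for w, i in self.vocab.items() if o[i] > 0]


vocab = "sam went to the kitchen garden where is".split()
net = MemoryNetwork(vocab)
# Memories are written at both train and test time (G always runs); only the
# parameters (here U) would be updated during training and frozen at test time.
for s in ["Sam went to the kitchen", "Sam went to the garden"]:
    net.G(net.I(s))
print(net.R(net.O(net.I("where is Sam"))))
```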
Image Source: Adrian Colyer
Source: Memory Networks
| Task | Papers | Share |
|---|---|---|
| Question Answering | 36 | 3.90% |
| General Classification | 26 | 2.81% |
| Semantic Segmentation | 25 | 2.71% |
| Prediction | 23 | 2.49% |
| Time Series Analysis | 23 | 2.49% |
| Video Semantic Segmentation | 20 | 2.16% |
| Decoder | 20 | 2.16% |
| Object | 20 | 2.16% |
| Sentence | 20 | 2.16% |