A Gated Recurrent Unit, or GRU, is a type of recurrent neural network. It is similar to an LSTM, but has only two gates - a reset gate and an update gate - and notably lacks an output gate. With fewer parameters, GRUs are generally easier and faster to train than their LSTM counterparts.
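As a sketch of how the two gates interact, below is a minimal NumPy GRU cell following the update equations of Cho et al. (2014). The weight names (`W_*`, `U_*`, `b_*`), the initialization scheme, and the class interface are illustrative assumptions, not taken from any particular library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """A single GRU step, following the equations in Cho et al. (2014)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_size)

        def mat(rows, cols):
            return rng.uniform(-scale, scale, size=(rows, cols))

        # One (W, U, b) triple per gate/candidate: W acts on the input x_t,
        # U acts on the previous hidden state h_{t-1}.
        self.W_z, self.U_z, self.b_z = mat(hidden_size, input_size), mat(hidden_size, hidden_size), np.zeros(hidden_size)  # update gate
        self.W_r, self.U_r, self.b_r = mat(hidden_size, input_size), mat(hidden_size, hidden_size), np.zeros(hidden_size)  # reset gate
        self.W_h, self.U_h, self.b_h = mat(hidden_size, input_size), mat(hidden_size, hidden_size), np.zeros(hidden_size)  # candidate

    def step(self, x, h_prev):
        z = sigmoid(self.W_z @ x + self.U_z @ h_prev + self.b_z)  # update gate
        r = sigmoid(self.W_r @ x + self.U_r @ h_prev + self.b_r)  # reset gate
        # The reset gate decides how much of the previous state
        # feeds into the candidate state.
        h_tilde = np.tanh(self.W_h @ x + self.U_h @ (r * h_prev) + self.b_h)
        # The update gate interpolates between the old state and the
        # candidate; with no output gate, h is the output directly.
        return z * h_prev + (1.0 - z) * h_tilde


# Run a short random sequence through the cell.
cell = GRUCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(5, 4)):
    h = cell.step(x, h)
print(h.shape)  # (8,)
```

Note that some implementations swap the interpolation convention, computing `(1 - z) * h_prev + z * h_tilde`; the two are equivalent up to relabeling the gate.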
Source: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
The most common tasks where GRUs are applied, by paper count:

| Task | Papers | Share |
|---|---|---|
| Time Series Analysis | 53 | 7.21% |
| Speech Synthesis | 39 | 5.31% |
| Language Modelling | 27 | 3.67% |
| General Classification | 27 | 3.67% |
| Sentiment Analysis | 24 | 3.27% |
| Classification | 18 | 2.45% |
| Text-To-Speech Synthesis | 15 | 2.04% |
| Time Series Forecasting | 15 | 2.04% |
| Speech Recognition | 14 | 1.90% |