Seq2Seq, or Sequence to Sequence, is a model used in sequence prediction tasks such as language modelling and machine translation. The idea is to use one LSTM, the encoder, to read the input sequence one timestep at a time and obtain a fixed-dimensional vector representation (the context vector), and then to use another LSTM, the decoder, to generate the output sequence from that vector. The second LSTM is essentially a recurrent neural network language model, except that it is conditioned on the input sequence.
(Note that this page refers to the original seq2seq model, not sequence-to-sequence models in general.)
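A minimal PyTorch sketch of this encoder-decoder setup is shown below. The module names, dimensions, and single-layer LSTMs are illustrative assumptions, not the paper's exact configuration (the original work used deep 4-layer LSTMs and reversed the source sentences).

```python
# Minimal sketch of an encoder-decoder (seq2seq) model with LSTMs.
# All sizes and names here are hypothetical, chosen for brevity.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed_dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed_dim)
        # Encoder LSTM reads the source one timestep at a time.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Decoder LSTM is a language model conditioned on the encoder state.
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sequence; the final hidden and cell
        # states serve as the fixed-size context vector.
        _, context = self.encoder(self.src_embed(src_ids))
        # Decode the target conditioned on that context (teacher forcing:
        # the gold previous token is fed at each step during training).
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), context)
        return self.out(dec_out)  # logits over the target vocabulary

# Toy usage with random token ids.
model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sequences, length 7
tgt = torch.randint(0, 1000, (2, 5))   # corresponding target prefixes, length 5
logits = model(src, tgt)               # shape: (2, 5, 1000)
```

At inference time, decoding would instead proceed token by token (e.g. greedily or with beam search), feeding each predicted token back into the decoder.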
Source: Sequence to Sequence Learning with Neural Networks
| Task | Papers | Share |
|---|---|---|
| Machine Translation | 78 | 8.98% |
| Text Generation | 47 | 5.41% |
| Language Modelling | 45 | 5.18% |
| Semantic Parsing | 41 | 4.72% |
| Speech Recognition | 22 | 2.53% |
| Question Answering | 21 | 2.42% |
| Abstractive Text Summarization | 21 | 2.42% |
| Text Summarization | 20 | 2.30% |
| Response Generation | 18 | 2.07% |