Seq2Seq, or Sequence to Sequence, is a model used in sequence prediction tasks such as language modelling and machine translation. The idea is to use one LSTM, the encoder, to read the input sequence one timestep at a time and produce a large fixed-dimensional vector representation (a context vector), and then to use another LSTM, the decoder, to extract the output sequence from that vector. The second LSTM is essentially a recurrent neural network language model, except that it is conditioned on the input sequence.
(Note that this page refers to the original Seq2Seq model, not sequence-to-sequence models in general.)
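The encoder–decoder idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's architecture: plain RNN cells stand in for the LSTMs, the weights are random rather than trained, and all names and dimensions (`vocab_size`, `hidden`, the `bos`/`eos` token ids) are hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, chosen for the sketch only.
vocab_size, hidden = 10, 8

# Random (untrained) weights; a plain RNN cell stands in for each LSTM.
emb = rng.normal(0, 0.1, (vocab_size, hidden))   # shared token embeddings
Wx_enc = rng.normal(0, 0.1, (hidden, hidden))
Wh_enc = rng.normal(0, 0.1, (hidden, hidden))
Wx_dec = rng.normal(0, 0.1, (hidden, hidden))
Wh_dec = rng.normal(0, 0.1, (hidden, hidden))
Wout = rng.normal(0, 0.1, (hidden, vocab_size))  # hidden -> vocab logits

def encode(src_ids):
    """Read the input one timestep at a time into one fixed-size vector."""
    h = np.zeros(hidden)
    for tok in src_ids:
        h = np.tanh(emb[tok] @ Wx_enc + h @ Wh_enc)
    return h  # the context vector summarising the whole input

def decode(context, bos=0, eos=1, max_len=5):
    """An RNN language model conditioned on the context: start the decoder
    from the encoder's final state and unroll greedily."""
    h, tok, out = context, bos, []
    for _ in range(max_len):
        h = np.tanh(emb[tok] @ Wx_dec + h @ Wh_dec)
        tok = int(np.argmax(h @ Wout))  # greedy choice of next token
        if tok == eos:
            break
        out.append(tok)
    return out

context = encode([3, 4, 5])
translation = decode(context)
```

With random weights the output tokens are meaningless; the point is the data flow: a variable-length input is compressed into one fixed-size `context`, and the decoder generates the output sequence from that vector alone.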
Source: Sequence to Sequence Learning with Neural Networks
Task | Papers | Share |
---|---|---|
Machine Translation | 91 | 11.61% |
Text Generation | 44 | 5.61% |
Language Modelling | 42 | 5.36% |
Semantic Parsing | 27 | 3.44% |
Speech Recognition | 27 | 3.44% |
Abstractive Text Summarization | 24 | 3.06% |
Question Answering | 23 | 2.93% |
Text Summarization | 23 | 2.93% |
Response Generation | 21 | 2.68% |