DV3 Attention Block is an attention-based module used in the Deep Voice 3 architecture. It uses a dot-product attention mechanism: a query vector (the decoder hidden state) and the per-timestep key vectors from the encoder are used to compute attention weights, and the module outputs a context vector computed as the weighted average of the encoder's value vectors.
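The mechanism can be sketched as follows. This is a minimal illustration assuming PyTorch-style tensors; the module, projection layers, and parameter names are hypothetical and not the reference Deep Voice 3 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DotProductAttention(nn.Module):
    """Minimal sketch of the dot-product attention described above.

    Projects decoder queries and encoder keys/values, computes
    softmax-normalized dot-product scores, and returns the weighted
    average of the value vectors as the context.
    """

    def __init__(self, decoder_dim: int, encoder_dim: int, attn_dim: int, dropout: float = 0.1):
        super().__init__()
        self.query_proj = nn.Linear(decoder_dim, attn_dim)
        self.key_proj = nn.Linear(encoder_dim, attn_dim)
        self.value_proj = nn.Linear(encoder_dim, attn_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, query, keys, values):
        # query:        (batch, T_dec, decoder_dim) -- decoder hidden states
        # keys, values: (batch, T_enc, encoder_dim) -- per-timestep encoder outputs
        q = self.query_proj(query)                # (batch, T_dec, attn_dim)
        k = self.key_proj(keys)                   # (batch, T_enc, attn_dim)
        v = self.value_proj(values)               # (batch, T_enc, attn_dim)

        scores = torch.bmm(q, k.transpose(1, 2))  # (batch, T_dec, T_enc) dot-product scores
        weights = F.softmax(scores, dim=-1)       # attention weights over encoder timesteps
        weights = self.dropout(weights)

        context = torch.bmm(weights, v)           # weighted average of the value vectors
        return context, weights
```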
Source: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
Task | Papers | Share |
---|---|---|
Speech Synthesis | 4 | 40.00% |
Domain Adaptation | 2 | 20.00% |
Unsupervised Domain Adaptation | 2 | 20.00% |
Melody Extraction | 1 | 10.00% |
Text-To-Speech Synthesis | 1 | 10.00% |
Component | Type |
---|---|
 | Feedforward Networks |
 | Regularization |
 | Attention Mechanisms |
 | Output Functions |