This year, we mainly built our systems on Transformer architectures for all tasks and focused on end-to-end speech translation (E2E-ST).
From the perspective of layer normalization (LN) position, Transformer architectures can be categorized into two types: Post-LN and Pre-LN.
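The distinction is only where LayerNorm sits relative to the residual connection. Below is a minimal PyTorch sketch contrasting the two placements; the class names, head count, and FFN sizing are illustrative assumptions, not from any specific paper.

```python
import torch
import torch.nn as nn

class PostLNBlock(nn.Module):
    """Post-LN: LayerNorm is applied *after* each residual addition."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        x = self.ln1(x + self.attn(x, x, x)[0])  # residual, then LN
        x = self.ln2(x + self.ffn(x))
        return x

class PreLNBlock(nn.Module):
    """Pre-LN: LayerNorm is applied *before* each sublayer."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h)[0]            # LN, then residual
        x = x + self.ffn(self.ln2(x))
        return x
```

Pre-LN keeps an identity path through the residual stream, which is commonly credited with more stable training of deep stacks, while Post-LN matches the original Transformer formulation.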
Masked language models (MLMs) have contributed to substantial performance improvements in zero anaphora resolution (ZAR).
We often use perturbations to regularize neural models.
Ranked #1 on Text Summarization on DUC 2004 Task 1 (using extra training data)
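As a sketch of the perturbation-based regularization mentioned above, two of the simplest options are Gaussian noise on embeddings and word-level dropout. The function names and rates below are illustrative assumptions, not values from any of the papers listed here.

```python
import torch

def gaussian_perturb(embeddings: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Add isotropic Gaussian noise to token embeddings during training."""
    return embeddings + sigma * torch.randn_like(embeddings)

def word_dropout(embeddings: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Zero out entire token vectors at random (word-level dropout)."""
    keep = torch.rand(embeddings.shape[:-1], device=embeddings.device) > p
    return embeddings * keep.unsqueeze(-1).float()
```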
This study addresses two underexplored issues in CDA: how to reduce the computational cost of data augmentation and how to ensure the quality of the generated data.
Existing approaches for grammatical error correction (GEC) largely rely on supervised learning with manually created GEC datasets.
The answer to this question is not as straightforward as one might expect, because common existing methods for incorporating an MLM into an EncDec model have potential drawbacks when applied to GEC.
Ranked #2 on Grammatical Error Correction on JFLEG
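One common way to combine an MLM with an EncDec model is to project the frozen MLM's hidden states and fuse them with the encoder's input embeddings. The sketch below illustrates that general idea only; the class name, dimensions, and the assumption that the EncDec shares BERT's tokenizer and vocabulary are all hypothetical, not the method of the paper above.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MLMFusedEncoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 512):
        super().__init__()
        # Frozen MLM used as a feature extractor (assumes shared vocabulary).
        self.mlm = BertModel.from_pretrained("bert-base-uncased")
        for p in self.mlm.parameters():
            p.requires_grad = False
        self.embed = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(self.mlm.config.hidden_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            mlm_h = self.mlm(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        # Fuse MLM features with the task encoder's own embeddings.
        x = self.embed(input_ids) + self.proj(mlm_h)
        return self.encoder(x, src_key_padding_mask=~attention_mask.bool())
```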
We present ESPnet-ST, which is designed for the quick development of speech-to-speech translation systems in a single framework.
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models.
Ranked #5 on Grammatical Error Correction on BEA-2019 (test)
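Pseudo data for GEC is typically built by corrupting clean text so that (noisy source, clean target) pairs can be synthesized at scale. Here is a minimal rule-based noising sketch; the corruption rates and the use of the sentence's own tokens as a substitution vocabulary are illustrative assumptions, not the generation scheme of the paper above.

```python
import random

def corrupt(tokens, p_drop=0.05, p_sub=0.05, p_swap=0.05, vocab=None):
    """Return a noised copy of `tokens` to serve as a pseudo source."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_drop:
            continue                              # simulate a deletion error
        if r < p_drop + p_sub and vocab:
            out.append(random.choice(vocab))      # simulate a substitution error
            continue
        out.append(tok)
    for i in range(len(out) - 1):                 # simulate word-order errors
        if random.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

clean = "the cat sat on the mat".split()
pseudo_source = corrupt(clean, vocab=clean)
# The GEC model is then trained on (pseudo_source -> clean) pairs.
```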
A regularization technique based on adversarial perturbation, originally developed in the field of image processing, has been successfully applied to text classification tasks and has yielded notable improvements.
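For text, the perturbation is applied in embedding space rather than to discrete tokens. A minimal sketch of this style of adversarial regularization follows; `epsilon`, the model interface, and the simple loss sum are illustrative assumptions, not the exact formulation of the paper above.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, embeddings, labels, epsilon=1.0):
    """Clean loss plus loss on embeddings perturbed along the loss gradient."""
    # Treat the embeddings as the input (detached from the embedding layer
    # for simplicity in this sketch).
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(embeddings), labels)
    grad, = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    # L2-normalized worst-case direction, scaled by epsilon.
    r_adv = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv_loss = F.cross_entropy(model(embeddings + r_adv), labels)
    return clean_loss + adv_loss
```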
Developing methods for understanding the inner workings of black-box neural models is an important research endeavor.
We also demonstrate that our method exhibits a "more data, better performance" property, scaling promisingly with the amount of unlabeled data.
The encoder-decoder model is widely used in natural language generation tasks.
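For reference, a minimal encoder-decoder sketch using PyTorch's built-in `nn.Transformer`; the vocabulary size and dimensions are illustrative assumptions, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class EncDec(nn.Module):
    def __init__(self, vocab_size: int = 32000, d_model: int = 512):
        super().__init__()
        self.src_embed = nn.Embedding(vocab_size, d_model)
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each target position attends only to its past.
        mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self.src_embed(src_ids), self.tgt_embed(tgt_ids),
                             tgt_mask=mask)
        return self.out(h)  # next-token logits over the vocabulary
```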