no code implementations • 22 Aug 2019 • Zuchao Li, Hai Zhao, Yingting Wu, Fengshun Xiao, Shu Jiang
Our experiments indicate that switching to the DSD loss after maximum-likelihood (ML) training converges helps models escape local optima and yields stable performance improvements.
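The two-stage idea above — train with one objective until it plateaus, then hand off to a second objective — can be sketched in a toy scalar setting. The DSD loss itself is not specified here, so both losses below are illustrative placeholders (a quadratic for stage 1, a quartic standing in for the second objective), not the paper's actual formulation.

```python
# Toy two-stage training: optimize a scalar parameter with loss A
# (a stand-in for the ML objective), then switch to loss B (a stand-in
# for the second objective, e.g. DSD) once loss A stops improving.
# Both losses are illustrative placeholders, not the paper's method.

def grad_a(w):          # d/dw of (w - 2.0)**2, the stage-1 loss
    return 2.0 * (w - 2.0)

def grad_b(w):          # d/dw of (w - 2.0)**4, the stage-2 loss
    return 4.0 * (w - 2.0) ** 3

def train(w=0.0, lr=0.05, tol=1e-8, steps=2000):
    switched = False
    prev = float("inf")
    for _ in range(steps):
        loss = (w - 2.0) ** 2 if not switched else (w - 2.0) ** 4
        # Detect stage-1 convergence: loss improvement below tolerance.
        if not switched and prev - loss < tol:
            switched = True
        prev = loss
        w -= lr * (grad_b(w) if switched else grad_a(w))
    return w, switched
```

Running `train()` first drives the parameter near its optimum under loss A, flips to loss B once improvement stalls, and continues from the stage-1 solution rather than restarting.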
no code implementations • CoNLL 2018 • Yingting Wu, Hai Zhao, Jia-Jun Tong
This paper describes the system of our team Phoenix for participating in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
1 code implementation • 25 Jul 2018 • Yingting Wu, Hai Zhao
For different language pairs, word-level neural machine translation (NMT) models with a fixed-size vocabulary suffer from the same problem of representing out-of-vocabulary (OOV) words.
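The OOV problem described above can be illustrated with a toy example: a fixed word-level vocabulary must map any unseen word to `<unk>`, while a subword inventory can still compose a representation for it. The hand-picked vocabulary and greedy longest-match segmenter below are illustrative assumptions, not the paper's method (real systems learn the inventory, e.g. with BPE).

```python
# Toy illustration of the OOV problem with a fixed word-level vocabulary,
# versus segmentation into known subword units. Both vocabularies here
# are hand-picked for the example; real NMT systems learn them from data.
WORD_VOCAB = {"the", "cat", "sat"}
SUBWORDS = {"un", "lock", "able", "the", "cat", "sat",
            "u", "n", "a", "b", "l", "e", "s"}

def word_encode(token):
    # Word-level model: any out-of-vocabulary token collapses to <unk>.
    return token if token in WORD_VOCAB else "<unk>"

def subword_encode(token):
    # Greedy longest-match segmentation over the subword inventory.
    out, i = [], 0
    while i < len(token):
        for j in range(len(token), i, -1):
            if token[i:j] in SUBWORDS:
                out.append(token[i:j])
                i = j
                break
        else:
            out.append("<unk>")  # no unit covers this character
            i += 1
    return out
```

Here `word_encode("unlockable")` loses the word entirely as `<unk>`, while `subword_encode("unlockable")` recovers it as `['un', 'lock', 'able']`, which is the motivation for subword-level vocabularies.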