Search Results for author: Fengshun Xiao

Found 4 papers, 1 paper with code

Switching-Aligned-Words Data Augmentation for Neural Machine Translation

no code implementations • 1 Jan 2021 • Fengshun Xiao, Zuchao Li, Hai Zhao

In neural machine translation (NMT), data augmentation methods such as back-translation make it possible to use extra monolingual data to improve translation performance; however, they require extra training data, and in-domain monolingual data is not always available.
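As a rough illustration of the back-translation idea mentioned above (not the paper's method or code), synthetic source sentences are generated from target-side monolingual text and paired with the genuine targets; the reverse-direction translation model here is a hypothetical placeholder callable:

```python
# Minimal sketch of back-translation for data augmentation in NMT.
# `reverse_translate` stands in for a trained target-to-source model;
# it is a hypothetical placeholder, not part of the paper's code.

def back_translate(monolingual_targets, reverse_translate):
    """Create synthetic (source, target) pairs from target-side monolingual text."""
    pairs = []
    for tgt in monolingual_targets:
        synthetic_src = reverse_translate(tgt)   # translate the target back into the source language
        pairs.append((synthetic_src, tgt))       # pair the synthetic source with the real target
    return pairs

def augment(parallel_pairs, monolingual_targets, reverse_translate):
    """Union of genuine parallel data and synthetic back-translated pairs."""
    return parallel_pairs + back_translate(monolingual_targets, reverse_translate)
```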

Data Augmentation Machine Translation +3

Hierarchical Contextualized Representation for Named Entity Recognition

1 code implementation • 6 Nov 2019 • Ying Luo, Fengshun Xiao, Hai Zhao

In this paper, we address these two deficiencies and propose a model augmented with hierarchical contextualized representation: sentence-level representation and document-level representation.
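One generic way to combine sentence-level and document-level context, sketched here only as an assumption (the paper's actual architecture differs in its details), is to concatenate each token representation with a document-level vector pooled over the whole document:

```python
import torch
import torch.nn as nn

class HierarchicalContext(nn.Module):
    """Illustrative sketch only: augment token representations with a
    document-level vector obtained by mean-pooling over the document.
    This is not the paper's exact model."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.project = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, token_reprs: torch.Tensor) -> torch.Tensor:
        # token_reprs: (num_tokens_in_document, hidden_dim), e.g. BiLSTM outputs.
        doc_repr = token_reprs.mean(dim=0, keepdim=True)        # (1, hidden_dim)
        doc_repr = doc_repr.expand(token_reprs.size(0), -1)     # repeat for every token
        combined = torch.cat([token_reprs, doc_repr], dim=-1)   # (num_tokens, 2*hidden_dim)
        return self.project(combined)                           # fed to the tagging layer
```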

Ranked #13 on Named Entity Recognition (NER) on Ontonotes v5 (English) (using extra training data)

Named Entity Recognition +2

Controllable Dual Skew Divergence Loss for Neural Machine Translation

no code implementations • 22 Aug 2019 • Zuchao Li, Hai Zhao, Yingting Wu, Fengshun Xiao, Shu Jiang

Our experiments indicate that switching to the DSD loss after maximum-likelihood (ML) training has converged helps models escape local optima and yields stable performance improvements.
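The switching schedule described in that finding can be pictured roughly as follows; the loss functions, the per-epoch training routine, and the plateau-based convergence test are hypothetical placeholders supplied by the caller, and the DSD loss itself is defined in the paper, not here:

```python
# Rough sketch of the training schedule described above: optimize the usual
# maximum-likelihood (cross-entropy) objective until it converges, then
# continue training with the DSD loss. All callables are hypothetical
# placeholders, not the paper's implementation.

def train_with_loss_switch(train_one_epoch, ml_loss, dsd_loss,
                           max_epochs=50, patience=3):
    best_val, stale, use_dsd = float("inf"), 0, False
    for _ in range(max_epochs):
        loss_fn = dsd_loss if use_dsd else ml_loss
        val_loss = train_one_epoch(loss_fn)      # returns validation loss
        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
        # Treat a `patience`-epoch plateau as convergence of ML training,
        # then switch to the DSD loss to help escape the local optimum.
        if not use_dsd and stale >= patience:
            use_dsd, best_val, stale = True, float("inf"), 0
```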

Machine Translation NMT +1

Lattice-Based Transformer Encoder for Neural Machine Translation

no code implementations • ACL 2019 • Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen

To integrate different segmentations with the state-of-the-art NMT model, the Transformer, we propose lattice-based encoders that explore effective word or subword representations automatically during training.
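To give a concrete picture of the input such an encoder consumes (the encoder design itself is the paper's contribution and is not reproduced here), a word lattice can be built by merging several segmentations of the same character sequence, with each token keeping its start and end character positions:

```python
# Illustrative only: merge multiple segmentations of one character sequence
# into a word lattice, where every edge is (start, end, token). A
# lattice-aware encoder can use these positions instead of a single linear
# word order; this sketch shows the input structure, not the paper's model.

def build_lattice(segmentations):
    """segmentations: token lists that each cover the same character sequence."""
    edges = set()
    for tokens in segmentations:
        pos = 0
        for tok in tokens:
            edges.add((pos, pos + len(tok), tok))
            pos += len(tok)
    return sorted(edges)

# Two segmentations of the same Chinese sentence merge into one lattice:
print(build_lattice([["南京市", "长江大桥"],
                     ["南京", "市长", "江", "大桥"]]))
# [(0, 2, '南京'), (0, 3, '南京市'), (2, 4, '市长'),
#  (3, 7, '长江大桥'), (4, 5, '江'), (5, 7, '大桥')]
```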

Machine Translation NMT +1
