1 code implementation • 19 Dec 2022 • Wenda Xu, Xian Qian, Mingxuan Wang, Lei Li, William Yang Wang
Existing learned metrics correlate imperfectly with human judgments, are model-dependent, or are limited to the domains or tasks for which human ratings are available.
no code implementations • 20 Oct 2022 • Xian Qian, Kai Hu, Jiaqiang Wang, Yifeng Liu, Xingyuan Pan, Jun Cao, Mingxuan Wang
This report describes our VolcTrans system for the WMT22 shared task on large-scale multilingual machine translation.
1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei Li, Hao Zhou
PARAGEN is a PyTorch-based NLP toolkit for further research and development on parallel generation.
1 code implementation • 12 Oct 2021 • Xiaohui Wang, Yang Wei, Ying Xiong, Guyue Huang, Xian Qian, Yufei Ding, Mingxuan Wang, Lei Li
In this paper, we present LightSeq2, a system to accelerate training for a general family of Transformer models on GPUs.
no code implementations • CoNLL 2017 • Xian Qian, Yang Liu
For this year's multilingual dependency parsing shared task, we developed a pipeline system that uses a variety of features for each of its components.
no code implementations • TACL 2014 • Xian Qian, Yang Liu
We show that the decoding problem in generalized Higher Order Conditional Random Fields (CRFs) can be decomposed into two parts: one is a tree labeling problem that can be solved in linear time using dynamic programming; the other is a supermodular quadratic pseudo-Boolean maximization problem, which can be solved in cubic time using a minimum cut algorithm.
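The first part of the decomposition above, labeling a tree in linear time with dynamic programming, can be sketched generically. This is not the paper's implementation: the function name, data layout, and toy scores below are hypothetical, and it only illustrates the standard bottom-up DP that makes tree labeling linear in the number of nodes for a fixed label set.

```python
def tree_label_dp(children, node_score, edge_score, root, labels):
    """Max total score of a tree labeling (illustrative sketch).

    children[v]              -> list of children of node v
    node_score[v][l]         -> unary score for assigning label l to v
    edge_score[(u, v)][a][b] -> pairwise score for labels a at u, b at v
    """
    # best[v][l] = best score of the subtree rooted at v when v takes label l
    best = {}

    def solve(v):
        for c in children.get(v, []):
            solve(c)
        best[v] = {}
        for l in labels:
            total = node_score[v][l]
            # Each child contributes its best subtree score plus the edge score;
            # every edge is examined once per label pair, hence linear in tree size.
            for c in children.get(v, []):
                total += max(edge_score[(v, c)][l][lc] + best[c][lc]
                             for lc in labels)
            best[v][l] = total

    solve(root)
    return max(best[root].values())
```

With a fixed label set of size k, each edge costs O(k^2), so the whole pass is O(n * k^2), i.e. linear in the number of nodes. The second part of the decomposition (supermodular quadratic pseudo-Boolean maximization via minimum cut) requires a max-flow construction and is omitted here.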
no code implementations • TACL 2013 • Xian Qian, Yang Liu
Graph-based dependency parsing is inefficient when handling non-local features because of the high computational complexity of inference.