no code implementations • 26 Oct 2020 • Ciprian Chelba, Junpei Zhou, Yuezhang Li, Hideto Kazawa, Jeff Klingner, Mengmeng Niu
For an English-Spanish translation model operating at $SACC = 0.89$ according to a non-expert annotator pool, we can derive a confidence estimate that labels 0.5-0.6 of the $good$ translations in an "in-domain" test set with 0.95 precision.
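The snippet above describes labeling a subset of translations at a fixed precision target. A minimal sketch of that thresholding step, assuming per-sentence confidence scores and binary good/bad annotations are available (the function name and inputs here are illustrative, not the paper's implementation):

```python
import numpy as np

def precision_threshold(scores, labels, target_precision=0.95):
    """Find the lowest confidence threshold whose accepted set still
    meets the target precision, maximizing the fraction of good
    translations that get labeled (a simple recall-at-precision sweep)."""
    order = np.argsort(-scores)           # most confident first
    s, y = scores[order], labels[order]
    tp = np.cumsum(y)                     # true positives if we accept the top k
    prec = tp / np.arange(1, len(y) + 1)  # precision of each top-k set
    ok = np.where(prec >= target_precision)[0]
    if len(ok) == 0:
        return None, 0.0                  # no threshold reaches the target
    k = ok[-1]                            # largest accepted set meeting precision
    coverage = tp[k] / max(y.sum(), 1)    # share of good translations labeled
    return s[k], coverage
```

With synthetic scores where the top four of five translations are good, the sweep returns the fourth score as the threshold and full coverage of the good translations.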
no code implementations • 2 May 2020 • Junpei Zhou, Ciprian Chelba, Yuezhang Li
Sentence level quality estimation (QE) for machine translation (MT) attempts to predict the translation edit rate (TER) cost of post-editing work required to correct MT output.
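TER, the target the QE model predicts, is the number of word-level edits needed to turn the MT output into the reference, normalized by reference length. A simplified sketch (Levenshtein edits only; real TER also counts block shifts as single edits, which this omits):

```python
def ter(hyp_tokens, ref_tokens):
    """Approximate translation edit rate: word-level edit distance
    (insert/delete/substitute) divided by reference length."""
    m, n = len(hyp_tokens), len(ref_tokens)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all hypothesis words
    for j in range(n + 1):
        d[0][j] = j                       # insert all reference words
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if hyp_tokens[i - 1] == ref_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match/substitution
    return d[m][n] / max(n, 1)
```

A perfect hypothesis scores 0.0; one wrong word out of three scores 1/3, the "cost of post-editing" the QE model tries to predict.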
no code implementations • WS 2019 • Junpei Zhou, Zhisong Zhang, Zecong Hu
In the WMT-2019 QE task, our system ranked second on the En-De NMT dataset and third on the En-Ru NMT dataset.
no code implementations • 1 Nov 2018 • Haojie Pan, Junpei Zhou, Zhou Zhao, Yan Liu, Deng Cai, Min Yang
We first propose a new task named Dialogue Description (Dial2Desc).
no code implementations • WS 2018 • Chen Li, Junpei Zhou, Zuyi Bao, Hengyou Liu, Guangwei Xu, Linlin Li
In the correction stage, candidates were generated by the three GEC models and then merged to output the final corrections for M and S types.
no code implementations • CVPR 2018 • Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, Ying Nian Wu
Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image.
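The coarse-to-fine initialization described above can be sketched as follows. This is a toy illustration with NumPy average-pooling and nearest-neighbour upsampling; the inner loop uses a stand-in update that pulls the synthesis toward the observed image at each resolution, whereas the actual method runs finite-step MCMC guided by the learned energy model's gradient:

```python
import numpy as np

def downsample(img, size):
    # Average-pool a square image to size x size (assumes divisibility).
    f = img.shape[0] // size
    return img.reshape(size, f, size, f).mean(axis=(1, 3))

def upsample(img, size):
    # Nearest-neighbour upsample to size x size.
    f = size // img.shape[0]
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def multigrid_synthesize(train_img, grids=(1, 4, 16, 64),
                         steps=10, lr=0.1, rng=None):
    """Start from the minimal 1x1 version of the training image, then
    refine grid by grid, initializing each resolution's sampling from
    the upsampled result of the previous one."""
    rng = rng or np.random.default_rng(0)
    synth = downsample(train_img, grids[0])   # 1x1 initialization
    for g in grids:
        synth = upsample(synth, g)
        target = downsample(train_img, g)
        for _ in range(steps):
            # Placeholder noisy update standing in for model-driven MCMC.
            synth = synth + lr * (target - synth) \
                    + 0.01 * rng.standard_normal(synth.shape)
    return synth
```

The point of the sketch is the initialization scheme: each grid's sampling chain starts from the previous grid's output rather than from noise, which is what makes the finite-step chains effective.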