Search Results for author: Xiang Geng

Found 11 papers, 5 papers with code

NJU’s submission to the WMT20 QE Shared Task

no code implementations • WMT (EMNLP) 2020 • Qu Cui, Xiang Geng, ShuJian Huang, Jiajun Chen

This paper describes our system for the sentence-level and word-level Quality Estimation Shared Tasks at WMT20.

Language Modelling • Sentence

From Handcrafted Features to LLMs: A Brief Survey for Machine Translation Quality Estimation

no code implementations • 21 Mar 2024 • Haofei Zhao, Yilun Liu, Shimin Tao, Weibin Meng, Yimeng Chen, Xiang Geng, Chang Su, Min Zhang, Hao Yang

Machine Translation Quality Estimation (MTQE) is the task of estimating the quality of machine-translated text in real time, without reference translations, which makes it of great importance for the development of MT.

Machine Translation • Sentence

MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization

1 code implementation • 12 Jan 2024 • Shuaijie She, Wei Zou, ShuJian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, Jiajun Chen

To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO), aiming to align the reasoning processes in other languages with the dominant language.

Mathematical Reasoning
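MAPO's exact objective is defined in the paper and its released code; purely as an illustration of the "alignment-as-preference optimization" idea, a DPO-style preference loss that treats the dominant-language reasoning chain as the preferred response might look like the sketch below. All function and argument names here are hypothetical, not taken from the MAPO codebase.

```python
import math

def preference_loss(logp_dominant, logp_other,
                    ref_logp_dominant, ref_logp_other, beta=0.1):
    # DPO-style objective: raise the policy's relative preference for the
    # dominant-language reasoning chain over the non-dominant one,
    # measured against a frozen reference model.
    margin = beta * ((logp_dominant - ref_logp_dominant)
                     - (logp_other - ref_logp_other))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

Minimizing this loss pushes the policy to assign relatively higher likelihood to dominant-language-aligned reasoning, which is one plausible reading of "alignment-as-preference."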

Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation

1 code implementation • 12 Jan 2024 • Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, ShuJian Huang

Large Language Models (LLMs) have achieved remarkable results in the machine translation evaluation task, yet there remains a gap in knowledge regarding how they utilize the provided data to conduct evaluations.

Machine Translation • Translation

CoP: Factual Inconsistency Detection by Controlling the Preference

1 code implementation • 3 Dec 2022 • Shuaijie She, Xiang Geng, ShuJian Huang, Jiajun Chen

To separate out the preference for factual consistency, we propose an unsupervised framework named CoP that controls the preference of the generation model with the help of prompts.

Abstractive Text Summarization

DirectQE: Direct Pretraining for Machine Translation Quality Estimation

no code implementations • 15 May 2021 • Qu Cui, ShuJian Huang, Jiahuan Li, Xiang Geng, Zaixiang Zheng, Guoping Huang, Jiajun Chen

However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which preclude QE models from benefiting from a large number of parallel corpora more directly.

Machine Translation • Translation

Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm

1 code implementation • 17 Feb 2021 • Bin Gu, Guodong Liu, Yanfu Zhang, Xiang Geng, Heng Huang

Modern machine learning algorithms usually involve tuning multiple (from one to thousands) hyperparameters which play a pivotal role in terms of model generalizability.

Hyperparameter Optimization

Quadruply Stochastic Gradients for Large Scale Nonlinear Semi-Supervised AUC Optimization

no code implementations • 29 Jul 2019 • Wanli Shi, Bin Gu, Xiang Li, Xiang Geng, Heng Huang

To address this problem, in this paper, we propose a novel scalable quadruply stochastic gradient algorithm (QSG-S2AUC) for nonlinear semi-supervised AUC optimization.

Stochastic Optimization
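The paper's quadruply stochastic scheme is more than a snippet can capture, but the pairwise surrogate that AUC optimization typically minimizes can be sketched with a single sampled positive/negative pair per step. The linear scorer and all names below are illustrative assumptions, not the paper's algorithm (which targets the nonlinear, semi-supervised setting).

```python
import numpy as np

def auc_pair_step(w, x_pos, x_neg, lr=0.1):
    # One SGD step on the pairwise squared AUC surrogate
    # (1 - (w.x_pos - w.x_neg))^2: the loss shrinks as positive
    # instances are scored above negative ones, which is what AUC
    # measures.  Linear scorer kept for simplicity.
    d = x_pos - x_neg
    margin = w @ d
    grad = -2.0 * (1.0 - margin) * d
    return w - lr * grad
```

Each iteration touches only one sampled pair, which is what makes this family of methods scale to large datasets.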

Scalable Semi-Supervised SVM via Triply Stochastic Gradients

no code implementations • 26 Jul 2019 • Xiang Geng, Bin Gu, Xiang Li, Wanli Shi, Guansheng Zheng, Heng Huang

Specifically, to handle the two types of data instances involved in S$^3$VM, TSGS$^3$VM samples a labeled instance and an unlabeled instance, as well as the random features, in each iteration to compute a triply stochastic gradient.
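The sampling scheme described above can be sketched roughly as follows. The random-feature map and the particular losses (a standard hinge on the labeled point, a symmetric hinge on the unlabeled one) are my assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def rff(x, W, b):
    # Random Fourier features approximating an RBF kernel.
    return np.sqrt(2.0 / W.shape[0]) * np.cos(W @ x + b)

def tsg_step(theta, x_lab, y_lab, x_unl, W, b, lam=0.5, lr=0.05):
    # One "triply stochastic" step: one labeled instance, one unlabeled
    # instance, and one draw of random features (W, b; a faithful
    # implementation would resample them every iteration).
    phi_l, phi_u = rff(x_lab, W, b), rff(x_unl, W, b)
    g = np.zeros_like(theta)
    if y_lab * (theta @ phi_l) < 1.0:        # hinge loss on labeled point
        g -= y_lab * phi_l
    f_u = theta @ phi_u
    if abs(f_u) < 1.0:                       # symmetric hinge on unlabeled
        g -= lam * np.sign(f_u) * phi_u      # pushes |f(x_unl)| away from 0
    return theta - lr * g
```

Because each iteration touches only one labeled point, one unlabeled point, and a finite random-feature draw, memory and per-step cost stay constant regardless of dataset size.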
