Search Results for author: Jiaxin Guo

Found 47 papers, 6 papers with code

HW-TSC’s Submissions to the WMT21 Biomedical Translation Task

no code implementations WMT (EMNLP) 2021 Hao Yang, Zhanglin Wu, Zhengzhe Yu, Xiaoyu Chen, Daimeng Wei, Zongyao Li, Hengchao Shang, Minghan Wang, Jiaxin Guo, Lizhi Lei, Chuanfei Xu, Min Zhang, Ying Qin

This paper describes the submission of Huawei Translation Service Center (HW-TSC) to the WMT21 biomedical translation task in two language pairs: Chinese↔English and German↔English (our registered team name is HuaweiTSC).

Translation

Make the Blind Translator See The World: A Novel Transfer Learning Solution for Multimodal Machine Translation

no code implementations MTSummit 2021 Minghan Wang, Jiaxin Guo, Yimeng Chen, Chang Su, Min Zhang, Shimin Tao, Hao Yang

Multimodal machine translation (MMT) models are built on large-scale pretrained networks, and their liability to overfit easily on the limited labelled training data available is a critical issue in MMT.

Multimodal Machine Translation NMT +2

Automatic Evaluation Metrics for Document-level Translation: Overview, Challenges and Trends

no code implementations 21 Apr 2025 Jiaxin Guo, Xiaoyu Chen, Zhiqiang Rao, Jinlong Yang, Zongyao Li, Hengchao Shang, Daimeng Wei, Hao Yang

With the rapid development of deep learning technologies, the field of machine translation has witnessed significant progress, especially with the advent of large language models (LLMs) that have greatly propelled the advancement of document-level translation.

Machine Translation Sentence +1

Endo3R: Unified Online Reconstruction from Dynamic Monocular Endoscopic Video

no code implementations 4 Apr 2025 Jiaxin Guo, Wenzhen Dong, Tianyu Huang, Hao Ding, Ziyi Wang, Haomin Kuang, Qi Dou, Yun-hui Liu

The core contribution of our method is to extend the capability of the recent pairwise reconstruction model to long-term incremental dynamic reconstruction via an uncertainty-aware dual memory mechanism.

Camera Pose Estimation Depth Estimation +3

Chain-of-Description: What I can understand, I can put into words

no code implementations 22 Feb 2025 Jiaxin Guo, Daimeng Wei, Zongyao Li, Hengchao Shang, Yuanchang Luo, Hao Yang

In this paper, we propose a novel strategy, termed Chain-of-Description (CoD) Prompting, tailored for Multi-Modal Large Language Models.
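
The snippet names the strategy without spelling out its mechanism. As a rough, hypothetical illustration of the idea suggested by the title (verbalize what is perceived first, then answer), a CoD-style prompt might look as follows; the wording and the `query_mllm` helper are assumptions, not the paper's:

```python
# Hypothetical sketch of a Chain-of-Description (CoD) style prompt for a
# multi-modal LLM: ask the model to describe the non-text input in words
# first, then answer based only on that description. `query_mllm` is a
# placeholder for any multi-modal chat API.

def cod_prompt(question: str) -> str:
    return (
        "Step 1: Describe in detail what you perceive in the attached input.\n"
        "Step 2: Based only on your Step 1 description, answer this question.\n"
        f"Question: {question}"
    )

def answer_with_cod(query_mllm, media, question: str) -> str:
    # One call: the model emits the description and the answer together.
    return query_mllm(media=media, prompt=cod_prompt(question))
```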

Doc-Guided Sent2Sent++: A Sent2Sent++ Agent with Doc-Guided Memory for Document-level Machine Translation

no code implementations 15 Jan 2025 Jiaxin Guo, Yuanchang Luo, Daimeng Wei, Ling Zhang, Zongyao Li, Hengchao Shang, Zhiqiang Rao, Shaojun Li, Jinlong Yang, Zhanglin Wu, Hao Yang

The contributions of this paper include a detailed analysis of current DocMT research, the introduction of the Sent2Sent++ decoding method, the Doc-Guided Memory mechanism, and validation of its effectiveness across languages and domains.

Document Level Machine Translation Machine Translation +2
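
As a minimal sketch of how a sentence-level agent with a document-guided memory could operate, assuming the memory holds a running summary and a glossary that condition each step (the loop and helper names are illustrative assumptions, not the paper's API):

```python
# Hypothetical Sent2Sent++-style loop: translate a document sentence by
# sentence while a doc-guided memory (running summary plus glossary)
# conditions every step. `llm` is a placeholder text-generation callable;
# glossary updates are omitted for brevity.

def translate_document(llm, sentences):
    memory = {"summary": "", "glossary": {}}
    output = []
    for src in sentences:
        prompt = (
            f"Document summary so far: {memory['summary']}\n"
            f"Glossary: {memory['glossary']}\n"
            f"Translate the next sentence consistently with the above:\n{src}"
        )
        output.append(llm(prompt))
        # Refresh the memory so later sentences stay coherent with earlier ones.
        memory["summary"] = llm(f"Update this summary: {memory['summary']}\nwith: {src}")
    return output
```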

M-Ped: Multi-Prompt Ensemble Decoding for Large Language Models

no code implementations 24 Dec 2024 Jiaxin Guo, Daimeng Wei, Yuanchang Luo, Shimin Tao, Hengchao Shang, Zongyao Li, Shaojun Li, Jinlong Yang, Zhanglin Wu, Zhiqiang Rao, Hao Yang

Given a unique input $X$, we submit $n$ variations of prompts with $X$ to LLMs in batch mode to decode and derive probability distributions.

Code Generation Machine Translation +1
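
The quoted sentence describes the core loop. A minimal sketch of one ensemble decoding step with Hugging Face-style models, assuming the per-prompt next-token distributions are simply averaged (the paper's exact aggregation rule may differ):

```python
import torch

# One M-Ped-style decoding step: n prompt variants of the same input X are
# decoded in a single batch and their next-token distributions are averaged.
# Assumes tokenizer.padding_side == "left" so index -1 is the last real
# token for every row; averaging is an assumed aggregation rule.

@torch.no_grad()
def mped_next_token(model, tokenizer, prompts_with_x):
    batch = tokenizer(prompts_with_x, return_tensors="pt", padding=True)
    logits = model(**batch).logits[:, -1, :]           # (n, vocab)
    probs = torch.softmax(logits, dim=-1).mean(dim=0)  # ensemble over prompts
    return int(torch.argmax(probs))                    # greedy choice
```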

Context-aware and Style-related Incremental Decoding framework for Discourse-Level Literary Translation

no code implementations 25 Sep 2024 Yuanchang Luo, Jiaxin Guo, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Zhiqiang Rao, Shaojun Li, Jinlong Yang, Hao Yang

This report outlines our approach for the WMT24 Discourse-Level Literary Translation Task, focusing on the Chinese-English language pair in the Constrained Track.

Sentence Translation

Exploring the traditional NMT model and Large Language Model for chat translation

no code implementations 24 Sep 2024 Jinlong Yang, Hengchao Shang, Daimeng Wei, Jiaxin Guo, Zongyao Li, Zhanglin Wu, Zhiqiang Rao, Shaojun Li, Yuhao Xie, Yuanchang Luo, Jiawei Zheng, Bin Wei, Hao Yang

This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on English↔German (en-de) in both directions.

Language Modeling Language Modelling +4

Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain

no code implementations 24 Sep 2024 Yuanchang Luo, Zhanglin Wu, Daimeng Wei, Hengchao Shang, Zongyao Li, Jiaxin Guo, Zhiqiang Rao, Shaojun Li, Jinlong Yang, Yuhao Xie, Jiawei Zheng, Bin Wei, Hao Yang

This article introduces the submission of Huawei Translation Service Center (HW-TSC) to the WMT 2024 Translation into Low-Resource Languages of Spain task.

Denoising Domain Adaptation +4

HW-TSC's Submission to the CCMT 2024 Machine Translation Tasks

no code implementations 23 Sep 2024 Zhanglin Wu, Yuanchang Luo, Daimeng Wei, Jiawei Zheng, Bin Wei, Zongyao Li, Hengchao Shang, Jiaxin Guo, Shaojun Li, Weidong Zhang, Ning Xie, Hao Yang

This paper presents the submission of Huawei Translation Services Center (HW-TSC) to machine translation tasks of the 20th China Conference on Machine Translation (CCMT 2024).

Automatic Post-Editing Ensemble Learning +5

Choose the Final Translation from NMT and LLM hypotheses Using MBR Decoding: HW-TSC's Submission to the WMT24 General MT Shared Task

no code implementations 23 Sep 2024 Zhanglin Wu, Daimeng Wei, Zongyao Li, Hengchao Shang, Jiaxin Guo, Shaojun Li, Zhiqiang Rao, Yuanchang Luo, Ning Xie, Hao Yang

This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT24 general machine translation (MT) shared task, where we participate in the English to Chinese (en2zh) language pair.

Ensemble Learning Language Modeling +5
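
MBR decoding itself is standard: pool the NMT and LLM hypotheses and keep the candidate with the highest expected utility against the rest. A minimal sketch with sentence-level BLEU from sacrebleu as a stand-in utility (the submission may well use a different metric):

```python
import sacrebleu

# Minimum Bayes Risk selection: score each candidate by its average utility
# against all other pooled hypotheses and keep the best one. Sentence BLEU
# stands in for whatever utility function the system actually used.

def mbr_select(hypotheses):
    def expected_utility(cand):
        others = [h for h in hypotheses if h is not cand]
        return sum(sacrebleu.sentence_bleu(cand, [h]).score for h in others) / len(others)
    return max(hypotheses, key=expected_utility)

# e.g. final = mbr_select(nmt_hypotheses + llm_hypotheses)
```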

LA-RAG: Enhancing LLM-based ASR Accuracy with Retrieval-Augmented Generation

no code implementations 13 Sep 2024 Shaojun Li, Hengchao Shang, Daimeng Wei, Jiaxin Guo, Zongyao Li, Xianghui He, Min Zhang, Hao Yang

Recent advancements in integrating speech information into large language models (LLMs) have significantly improved automatic speech recognition (ASR) accuracy.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4
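
The listing gives no implementation detail; the sketch below shows one generic form of retrieval-augmented ASR consistent with the title: fetch speech-text pairs whose speech embeddings are closest to the query utterance and surface their transcripts to the LLM as exemplars. The datastore layout and prompt are assumptions, not the paper's design.

```python
import numpy as np

# Generic retrieval-augmented ASR step: cosine-similarity retrieval over a
# datastore of (speech_embedding, transcript) pairs, then a prompt that
# exposes the retrieved transcripts as hints. All details are assumed.

def retrieve_transcripts(query_emb, datastore, k=3):
    sims = [np.dot(query_emb, emb) / (np.linalg.norm(query_emb) * np.linalg.norm(emb))
            for emb, _ in datastore]
    top = np.argsort(sims)[-k:][::-1]
    return [datastore[i][1] for i in top]

def build_prompt(exemplars):
    hints = "\n".join(f"Transcript of a similar utterance: {t}" for t in exemplars)
    return hints + "\nTranscribe the attached speech, using the hints above."
```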

UC-NeRF: Uncertainty-aware Conditional Neural Radiance Fields from Endoscopic Sparse Views

1 code implementation 4 Sep 2024 Jiaxin Guo, Jiangliu Wang, Ruofeng Wei, Di Kang, Qi Dou, Yun-hui Liu

In neural rendering, we design a base-adaptive NeRF network to exploit the uncertainty estimation for explicitly handling the photometric inconsistencies.

NeRF Neural Rendering +1
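
The snippet mentions exploiting uncertainty to handle photometric inconsistencies. One common pattern for this, shown as an illustration rather than the paper's exact objective, is a heteroscedastic photometric loss that down-weights rays where predicted uncertainty is high:

```python
import torch

# Heteroscedastic photometric loss (Kendall & Gal style): rays with high
# predicted uncertainty sigma contribute less to the colour error, while
# the log-sigma term keeps sigma from growing without bound.

def uncertainty_photometric_loss(pred_rgb, gt_rgb, sigma, eps=1e-6):
    sq_err = (pred_rgb - gt_rgb).pow(2).sum(dim=-1)  # per-ray colour error
    return (sq_err / (2 * sigma.pow(2) + eps) + torch.log(sigma + eps)).mean()
```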

Free-SurGS: SfM-Free 3D Gaussian Splatting for Surgical Scene Reconstruction

1 code implementation 3 Jul 2024 Jiaxin Guo, Jiangliu Wang, Di Kang, Wenzhen Dong, Wenting Wang, Yun-hui Liu

To tackle this problem, in this paper, we propose the first SfM-free 3DGS-based method for surgical scene reconstruction by jointly optimizing the camera poses and scene representation.

3DGS 3D Reconstruction +3
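
The key idea quoted above, joint optimization of camera poses and the scene representation, fits in a few lines of training loop. Everything below (`render`, `photometric_loss`, the parameter containers) is a placeholder sketch, not the released code:

```python
import torch

# SfM-free joint optimization in sketch form: the photometric loss
# backpropagates into both the Gaussian scene parameters and the per-frame
# camera poses, so no precomputed SfM poses are required.

def fit(gaussians, poses, frames, render, photometric_loss, steps=1000):
    opt = torch.optim.Adam(
        list(gaussians.parameters()) + list(poses.parameters()), lr=1e-3)
    for _ in range(steps):
        for i, image in enumerate(frames):
            loss = photometric_loss(render(gaussians, poses[i]), image)
            opt.zero_grad()
            loss.backward()
            opt.step()
```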

An End-to-End Speech Summarization Using Large Language Model

no code implementations 2 Jul 2024 Hengchao Shang, Zongyao Li, Jiaxin Guo, Shaojun Li, Zhiqiang Rao, Yuanchang Luo, Daimeng Wei, Hao Yang

In this paper, we propose an end-to-end SSum model that utilizes Q-Former as a connector for the audio-text modality and employs LLMs to generate text summaries directly from speech features.

Language Modeling Language Modelling +2
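
A much-simplified sketch of the Q-Former-as-connector idea: a fixed set of learnable queries cross-attends to variable-length speech-encoder features and emits a fixed-length sequence projected into the LLM's embedding space. The real Q-Former stacks several transformer blocks; a single attention layer and illustrative dimensions stand in here:

```python
import torch
import torch.nn as nn

# Simplified Q-Former-style connector for the audio-text modality: learnable
# queries attend over speech features and are projected to the LLM width.

class SpeechConnector(nn.Module):
    def __init__(self, n_queries=32, d_audio=768, d_llm=4096):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_audio))
        self.attn = nn.MultiheadAttention(d_audio, num_heads=8, batch_first=True)
        self.proj = nn.Linear(d_audio, d_llm)

    def forward(self, speech_feats):                 # (B, T, d_audio)
        q = self.queries.unsqueeze(0).expand(speech_feats.size(0), -1, -1)
        out, _ = self.attn(q, speech_feats, speech_feats)
        return self.proj(out)                        # (B, n_queries, d_llm)
```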

Why Not Transform Chat Large Language Models to Non-English?

1 code implementation 22 May 2024 Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, Chang Su, Yanqing Zhao, Xinglin Lyu, Min Zhang, Jiajun Chen, Hao Yang, ShuJian Huang

For the second issue, we propose a method comprising two synergistic components: low-rank adaptation for training to maintain the original LLM parameters, and recovery KD, which utilizes data generated by the chat LLM itself to recover the original knowledge from the frozen parameters.

Knowledge Distillation
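
Both components lend themselves to compact sketches. Below, a minimal LoRA layer keeps the original weights frozen while training only a rank-r update, and a KD loss matches the student to the frozen teacher on self-generated data; hyperparameters and shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# (1) Low-rank adaptation: the pretrained weight is frozen; only B @ A is
# trained, preserving the original LLM parameters. (2) Recovery KD: on data
# generated by the chat LLM itself, the student matches the frozen teacher's
# distribution to recover original knowledge.

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # keep original parameters
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)

def recovery_kd_loss(student_logits, teacher_logits, T=1.0):
    # KL between frozen-teacher and student distributions.
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
```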

A Novel Paradigm Boosting Translation Capabilities of Large Language Models

no code implementations 18 Mar 2024 Jiaxin Guo, Hao Yang, Zongyao Li, Daimeng Wei, Hengchao Shang, Xiaoyu Chen

Experimental results conducted using the Llama2 model, particularly on Chinese-Llama2 after monolingual augmentation, demonstrate the improved translation capabilities of LLMs.

Machine Translation Translation

Text Style Transfer Back-Translation

1 code implementation 2 Jun 2023 Daimeng Wei, Zhanglin Wu, Hengchao Shang, Zongyao Li, Minghan Wang, Jiaxin Guo, Xiaoyu Chen, Zhengzhe Yu, Hao Yang

To address this issue, we propose Text Style Transfer Back Translation (TST BT), which uses a style transfer model to modify the source side of BT data.

Data Augmentation Domain Adaptation +4
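
The quoted sentence translates almost directly into a data-preparation loop; `backward_mt` and `style_transfer` below are placeholders for the trained models:

```python
# TST BT data flow as described above: ordinary back-translation produces
# (synthetic_source, natural_target) pairs, then a style-transfer model
# rewrites only the source side; the target side stays untouched.

def tst_bt_corpus(monolingual_targets, backward_mt, style_transfer):
    corpus = []
    for tgt in monolingual_targets:
        src = backward_mt(tgt)        # standard back-translation
        src = style_transfer(src)     # modify the source side of the BT data
        corpus.append((src, tgt))
    return corpus
```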

A Visual Navigation Perspective for Category-Level Object Pose Estimation

1 code implementation 25 Mar 2022 Jiaxin Guo, Fangxun Zhong, Rong Xiong, Yunhui Liu, Yue Wang, Yiyi Liao

In this paper, we take a deeper look at the inference of analysis-by-synthesis from the perspective of visual navigation, and investigate what is a good navigation policy for this specific task.

Imitation Learning Pose Estimation +1
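
Read as navigation, analysis-by-synthesis becomes an agent that repeatedly compares its current rendered view with the observation and takes a pose-update "action". A minimal sketch, with `render` and `policy` as placeholder learned modules and pose composition simplified to addition:

```python
# Analysis-by-synthesis viewed as navigation: starting from an initial
# pose, a policy compares the rendered view with the observation and
# outputs a pose update, like an agent moving toward the goal view.

def estimate_pose(render, policy, observation, pose, steps=10):
    for _ in range(steps):
        synthesized = render(pose)               # current "view" of the agent
        delta = policy(observation, synthesized) # navigation action
        pose = pose + delta                      # simplified pose composition
    return pose
```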

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

no code implementations 22 Dec 2021 Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data by the pre-trained model itself and finally re-trains a model on the combination of raw data and distilled data.

Knowledge Distillation Machine Translation +1
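
The three stages quoted above map one-to-one onto a short procedure; `train` and `generate` are placeholders for an NMT training loop and decoding, not a specific toolkit's API:

```python
# Self-Distillation Mixup training as described above: pre-train on raw
# data, let the pre-trained model generate distilled targets for the same
# sources, then re-train on the union of raw and distilled pairs.

def sdm(train, generate, raw_pairs):
    model = train(raw_pairs)                                           # 1) pre-train
    distilled = [(src, generate(model, src)) for src, _ in raw_pairs]  # 2) self-distill
    return train(raw_pairs + distilled)                                # 3) re-train on mixture
```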

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

no code implementations 22 Dec 2021 Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but translation quality reaches an upper bound once the number of encoder layers exceeds 18.

de-en Machine Translation +2

Asymptotics in a class of network models with an increasing sub-Gamma degree sequence

no code implementations 2 Nov 2021 Jing Luo, Haoyu Wei, Xiaoyu Lei, Jiaxin Guo

For differential privacy under sub-Gamma noise, we derive the asymptotic properties of a class of binary-valued network models with a general link function.
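
For reference, the standard sub-Gamma condition (e.g. Boucheron, Lugosi and Massart) bounds the log moment generating function of a centered random variable X by a variance factor v and a scale c; the paper's exact parameterization may differ:

```latex
% Standard sub-Gamma condition on the moment generating function:
\log \mathbb{E}\, e^{\lambda X} \;\le\; \frac{\lambda^{2} v}{2\,(1 - c\lambda)},
\qquad 0 < \lambda < \tfrac{1}{c}.
```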

3D Point-to-Keypoint Voting Network for 6D Pose Estimation

no code implementations 22 Dec 2020 Weitong Hua, Jiaxin Guo, Yue Wang, Rong Xiong

In this paper, we propose a framework for 6D pose estimation from RGB-D data based on spatial structure characteristics of 3D keypoints.

6D Pose Estimation
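
A compact sketch of the voting idea: every scene point casts a vote, point plus predicted offset, for each keypoint's 3D location, and the votes are aggregated before a pose fit. Offset prediction and the final alignment are out of scope here, and mean aggregation is an assumption:

```python
import numpy as np

# Point-to-keypoint voting in sketch form: N points each predict offsets to
# K keypoints; votes (point + offset) are aggregated by a simple mean. A
# real system would regress offsets with a network and recover the 6D pose
# by aligning predicted keypoints to the object model's keypoints.

def vote_keypoints(points, offsets):
    # points: (N, 3) scene points; offsets: (N, K, 3) predicted offsets
    votes = points[:, None, :] + offsets   # (N, K, 3) candidate locations
    return votes.mean(axis=0)              # (K, 3) aggregated keypoints
```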

PREGAN: Pose Randomization and Estimation for Weakly Paired Image Style Translation

1 code implementation 31 Oct 2020 Zexi Chen, Jiaxin Guo, Xuecheng Xu, Yunkai Wang, Yue Wang, Rong Xiong

Utilizing the trained model under different conditions without data annotation is attractive for robot applications.

object-detection Object Detection +4
