Search Results for author: Liangyou Li

Found 46 papers, 8 papers with code

SongRewriter: A Chinese Song Rewriting System with Controllable Content and Rhyme Scheme

1 code implementation • 28 Nov 2022 • Yusen Sun, Liangyou Li, Qun Liu, Dit-yan Yeung

Although lyrics generation has achieved significant progress in recent years, it has limited practical applications because the generated lyrics cannot be performed without composing compatible melodies.

Aligning Large Language Models with Human: A Survey

1 code implementation • 24 Jul 2023 • YuFei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu

(2) Training methodologies: a detailed review of the prevailing training methods employed for LLM alignment.

FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models

1 code implementation • 31 Oct 2023 • Yuxin Jiang, YuFei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, Wei Wang

To fill this research gap, in this paper, we propose FollowBench, a Multi-level Fine-grained Constraints Following Benchmark for LLMs.

Instruction Following

Learning to Edit: Aligning LLMs with Knowledge Editing

1 code implementation • 19 Feb 2024 • Yuxin Jiang, YuFei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang

Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention.

Knowledge Editing • Philosophy

MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models

1 code implementation • 30 Jan 2024 • Wai-Chung Kwan, Xingshan Zeng, Yuxin Jiang, YuFei Wang, Liangyou Li, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong

Large language models (LLMs) are increasingly relied upon for complex multi-turn conversations across diverse real-world applications.

Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption

1 code implementation • 10 Jun 2020 • Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, Liangyou Li

We argue that the vulnerability of model parameters is of crucial value to the study of model robustness and generalization, but little research has been devoted to understanding this matter.
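To make the idea concrete: parameter corruption can be probed by perturbing trained weights and measuring the resulting loss change. The sketch below is illustrative only, not the paper's released code; the toy model, data, and step size `eps` are invented, and the gradient-sign direction is one plausible "adversarial" choice.

```python
import torch
import torch.nn as nn

# Toy setup: a small regression model and a fixed batch (illustrative only).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss_fn = nn.MSELoss()

def loss_with_offset(offset_by_param):
    """Evaluate the loss after adding an offset to every parameter."""
    with torch.no_grad():
        for p, d in zip(model.parameters(), offset_by_param):
            p.add_(d)
        loss = loss_fn(model(x), y).item()
        for p, d in zip(model.parameters(), offset_by_param):
            p.sub_(d)  # restore the original parameters
    return loss

base = loss_fn(model(x), y)
base.backward()  # gradients w.r.t. parameters, used to direct the corruption

eps = 1e-2
random_dir = [eps * torch.randn_like(p) for p in model.parameters()]
grad_dir = [eps * p.grad.sign() for p in model.parameters()]  # worst-case-ish direction

print(f"clean loss:          {base.item():.4f}")
print(f"random corruption:   {loss_with_offset(random_dir):.4f}")
print(f"gradient corruption: {loss_with_offset(grad_dir):.4f}")
```

Gradient-directed corruptions of the same magnitude typically hurt far more than random ones, which is the kind of asymmetry such a study quantifies.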

Context-Aware Graph Segmentation for Graph-Based Translation

no code implementations • EACL 2017 • Liangyou Li, Andy Way, Qun Liu

In this paper, we present an improved graph-based translation model which segments an input graph into node-induced subgraphs by taking source context into consideration.

Segmentation • Translation
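For readers unfamiliar with the term, a node-induced subgraph keeps a chosen node set plus exactly the edges whose endpoints are both in that set. A minimal networkx sketch; the toy graph and node choice are invented, and the paper's context-aware segmentation criterion is not shown:

```python
import networkx as nx

# A toy source graph; edges stand in for dependency/semantic relations.
G = nx.Graph()
G.add_edges_from([("he", "bought"), ("bought", "book"),
                  ("book", "new"), ("bought", "yesterday")])

# A node-induced subgraph keeps exactly the edges between the selected nodes:
# here ("bought", "book") and ("book", "new"), but not ("bought", "yesterday").
segment = G.subgraph(["bought", "book", "new"])
print(sorted(segment.edges()))
```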

Topic-Informed Neural Machine Translation

no code implementations • COLING 2016 • Jian Zhang, Liangyou Li, Andy Way, Qun Liu

In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance.

Machine Translation • NMT +2

Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking

no code implementations • IJCNLP 2017 • Long-Yue Wang, Jinhua Du, Liangyou Li, Zhaopeng Tu, Andy Way, Qun Liu

We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management.

Dialogue Understanding • Machine Translation +3

Huawei's NMT Systems for the WMT 2019 Biomedical Translation Task

no code implementations • WS 2019 • Wei Peng, Jianfeng Liu, Liangyou Li, Qun Liu

This paper describes Huawei's neural machine translation systems for the WMT 2019 biomedical translation shared task.

Domain Adaptation • Machine Translation +3

A General Framework for Adaptation of Neural Machine Translation to Simultaneous Translation

no code implementations • AACL 2020 • Yun Chen, Liangyou Li, Xin Jiang, Xiao Chen, Qun Liu

Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to syntactic structure differences and simultaneity requirements.

Machine Translation • NMT +2

Pretrained Language Models for Document-Level Neural Machine Translation

no code implementations8 Nov 2019 Liangyou Li, Xin Jiang, Qun Liu

Previous work on document-level NMT usually focuses on limited contexts because of degraded performance on larger contexts.

Machine Translation • NMT +2

Document Graph for Neural Machine Translation

no code implementations • EMNLP 2021 • Mingzhou Xu, Liangyou Li, Derek F. Wong, Qun Liu, Lidia S. Chao

Previous works have shown that contextual information can improve the performance of neural machine translation (NMT).

Machine Translation • NMT +1

Future-Guided Incremental Transformer for Simultaneous Translation

no code implementations23 Dec 2020 Shaolei Zhang, Yang Feng, Liangyou Li

Simultaneous translation (ST) starts translating while still reading the source sentence, and is used in many online scenarios.

Knowledge Distillation • Translation
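A common way to make the read/write scheduling of simultaneous translation concrete is the wait-k policy: read k source tokens, then alternate one write per additional read. The sketch below shows only this schedule with a dummy copy "model"; it is not the paper's future-guided incremental Transformer, and all names here are hypothetical.

```python
from typing import Callable, List

def wait_k_decode(source: List[str],
                  translate_step: Callable[[List[str], List[str]], str],
                  k: int = 3,
                  max_len: int = 50) -> List[str]:
    """Generic wait-k schedule: read k source tokens, then alternate
    one WRITE per additional READ until the source is exhausted."""
    target: List[str] = []
    read = min(k, len(source))
    while len(target) < max_len:
        token = translate_step(source[:read], target)  # model sees only the prefix
        if token == "<eos>":
            break
        target.append(token)
        if read < len(source):
            read += 1  # READ one more source token after each WRITE
    return target

# Dummy "model": copies source tokens, just to show the schedule.
def dummy_step(src_prefix, tgt_prefix):
    return src_prefix[len(tgt_prefix)] if len(tgt_prefix) < len(src_prefix) else "<eos>"

print(wait_k_decode("simultaneous translation starts before the sentence ends".split(),
                    dummy_step))
```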

Dependency Graph-to-String Statistical Machine Translation

no code implementations20 Mar 2021 Liangyou Li, Andy Way, Qun Liu

We present graph-based translation models which translate source graphs into target strings.

Machine Translation • Translation

An Approach to Improve Robustness of NLP Systems against ASR Errors

no code implementations25 Mar 2021 Tong Cui, Jinghui Xiao, Liangyou Li, Xin Jiang, Qun Liu

Speech-enabled systems typically first convert audio to text through an automatic speech recognition (ASR) model and then feed the text to downstream natural language processing (NLP) modules.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +5

Multilingual Speech Translation with Unified Transformer: Huawei Noah's Ark Lab at IWSLT 2021

no code implementations1 Jun 2021 Xingshan Zeng, Liangyou Li, Qun Liu

We use a unified transformer architecture for our MultiST model, so that the data from different modalities (i.e., speech and text) and different tasks (i.e., Speech Recognition, Machine Translation, and Speech Translation) can be exploited to enhance the model's ability.

Data Augmentation • Machine Translation +4
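A hedged sketch of what such a unified architecture can look like: modality-specific front ends (a convolutional subsampler for speech, an embedding table for text) feeding one shared Transformer, so ASR, MT, and ST batches can be mixed during training. All dimensions, layer counts, and names below are invented; this is not the submitted system's code.

```python
import torch
import torch.nn as nn

class UnifiedST(nn.Module):
    """Sketch of a unified model: modality-specific front ends feed one
    shared Transformer, so speech and text tasks share parameters."""
    def __init__(self, vocab=1000, d=256):
        super().__init__()
        self.speech_frontend = nn.Sequential(  # subsample 80-dim fbank frames
            nn.Conv1d(80, d, kernel_size=5, stride=2, padding=2), nn.ReLU())
        self.text_frontend = nn.Embedding(vocab, d)
        self.shared = nn.Transformer(d_model=d, nhead=4,
                                     num_encoder_layers=2, num_decoder_layers=2,
                                     batch_first=True)
        self.out = nn.Linear(d, vocab)

    def forward(self, src, tgt_tokens, modality):
        if modality == "speech":                     # src: (B, T, 80) features
            enc_in = self.speech_frontend(src.transpose(1, 2)).transpose(1, 2)
        else:                                        # src: (B, T) token ids
            enc_in = self.text_frontend(src)
        dec_in = self.text_frontend(tgt_tokens)
        return self.out(self.shared(enc_in, dec_in))

model = UnifiedST()
st_logits = model(torch.randn(2, 100, 80), torch.randint(0, 1000, (2, 12)), "speech")
mt_logits = model(torch.randint(0, 1000, (2, 20)), torch.randint(0, 1000, (2, 12)), "text")
print(st_logits.shape, mt_logits.shape)  # both (2, 12, 1000)
```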

Learning Multilingual Representation for Natural Language Understanding with Enhanced Cross-Lingual Supervision

no code implementations9 Jun 2021 Yinpeng Guo, Liangyou Li, Xin Jiang, Qun Liu

Recently, pre-training multilingual language models has shown great potential in learning multilingual representation, a crucial topic of natural language processing.

Natural Language Understanding

RealTranS: End-to-End Simultaneous Speech Translation with Convolutional Weighted-Shrinking Transformer

no code implementations • Findings (ACL) 2021 • Xingshan Zeng, Liangyou Li, Qun Liu

To bridge the modality gap between speech and text, RealTranS gradually downsamples the input speech with interleaved convolution and unidirectional Transformer layers for acoustic modeling, and then maps speech features into text space with a weighted-shrinking operation and a semantic encoder.

Translation
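One plausible reading of a weighted-shrinking operation, sketched under stated assumptions: greedy frame labels (e.g., from a CTC head) define segments, blank frames are dropped, and each segment collapses into a confidence-weighted average of its frames, bringing speech features closer to text length. The exact formulation in RealTranS may differ; everything below is illustrative.

```python
import torch

def weighted_shrink(frames: torch.Tensor, logits: torch.Tensor, blank: int = 0):
    """Collapse frame-level features to segment-level vectors.
    frames: (T, d) acoustic features; logits: (T, V) per-frame label logits."""
    probs, labels = logits.softmax(-1).max(-1)   # per-frame confidence and label
    segments, buf, buf_w, prev = [], [], [], None
    for t in range(frames.size(0)):
        lab = labels[t].item()
        if lab != prev and buf:                  # a label change closes a segment
            w = torch.stack(buf_w)
            segments.append((torch.stack(buf) * (w / w.sum()).unsqueeze(-1)).sum(0))
            buf, buf_w = [], []
        if lab != blank:                         # blank frames contribute nothing
            buf.append(frames[t]); buf_w.append(probs[t])
        prev = lab
    if buf:
        w = torch.stack(buf_w)
        segments.append((torch.stack(buf) * (w / w.sum()).unsqueeze(-1)).sum(0))
    return torch.stack(segments) if segments else frames.new_zeros(0, frames.size(1))

shrunk = weighted_shrink(torch.randn(50, 256), torch.randn(50, 100))
print(shrunk.shape)  # one vector per greedy-label segment
```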

Uncertainty-Aware Balancing for Multilingual and Multi-Domain Neural Machine Translation Training

no code implementations • EMNLP 2021 • Minghao Wu, Yitong Li, Meng Zhang, Liangyou Li, Gholamreza Haffari, Qun Liu

In this work, we propose an approach, MultiUAT, that dynamically adjusts the training data usage based on the model's uncertainty on a small set of trusted clean data for multi-corpus machine translation.

Machine Translation • Translation
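The core scheduling idea can be sketched in a few lines: estimate per-corpus uncertainty on trusted clean data (approximated here by dev-set cross-entropy, with made-up numbers) and sample the next training batch proportionally. MultiUAT's actual uncertainty estimates are richer than this toy version.

```python
import math
import random

def sampling_weights(dev_xent: dict, temperature: float = 1.0) -> dict:
    """Corpora the model is more uncertain about (higher cross-entropy on
    trusted dev data) get proportionally higher sampling probability."""
    z = sum(math.exp(x / temperature) for x in dev_xent.values())
    return {c: math.exp(x / temperature) / z for c, x in dev_xent.items()}

# Hypothetical per-corpus dev cross-entropies from evaluating the current model.
dev_xent = {"news": 2.1, "medical": 3.4, "subtitles": 2.8}
weights = sampling_weights(dev_xent)
corpus = random.choices(list(weights), weights=list(weights.values()), k=1)[0]
print(weights, "-> next batch from:", corpus)
```

Recomputing the weights periodically as training progresses is what makes the data usage dynamic.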

Adversarial Parameter Defense by Multi-Step Risk Minimization

no code implementations7 Sep 2021 Zhiyuan Zhang, Ruixuan Luo, Xuancheng Ren, Qi Su, Liangyou Li, Xu sun

To enhance neural networks, we propose the adversarial parameter defense algorithm that minimizes the average risk of multiple adversarial parameter corruptions.
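A hedged single-file sketch of the underlying objective, not the paper's exact multi-step algorithm: each update averages gradients taken at several corrupted parameter points (sign-of-gradient corruptions of varying magnitude), rather than only at the clean parameters.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)                   # toy model and data, illustrative only
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss_fn = nn.MSELoss()
eps, K = 1e-3, 4                           # corruption size and number of samples

for step in range(5):
    opt.zero_grad()
    loss_fn(model(x), y).backward()        # clean gradient gives the corruption direction
    dirs = [eps * p.grad.sign() for p in model.parameters()]
    opt.zero_grad()
    for _ in range(K):
        scale = torch.rand(1).item()       # vary the corruption magnitude
        with torch.no_grad():
            for p, d in zip(model.parameters(), dirs):
                p.add_(scale * d)          # move to a corrupted parameter point
        (loss_fn(model(x), y) / K).backward()  # accumulate averaged gradients
        with torch.no_grad():
            for p, d in zip(model.parameters(), dirs):
                p.sub_(scale * d)          # restore the clean parameters
    opt.step()                             # descend on the average corrupted risk
    print(f"step {step}: clean loss = {loss_fn(model(x), y).item():.4f}")
```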

Triangular Transfer: Freezing the Pivot for Triangular Machine Translation

no code implementations • ACL 2022 • Meng Zhang, Liangyou Li, Qun Liu

Triangular machine translation is a special case of low-resource machine translation where the language pair of interest has limited parallel data, but both languages have abundant parallel data with a pivot language.

Language Modelling • Machine Translation +2
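Mechanically, the "freezing the pivot" idea reduces to excluding pivot-side parameters from optimization while fine-tuning on the scarce source-target pair. A toy sketch; the module names are hypothetical, not the paper's code:

```python
import torch.nn as nn

model = nn.ModuleDict({
    "src_embed": nn.Embedding(100, 16),
    "pivot_embed": nn.Embedding(100, 16),  # pretrained on abundant pivot-language data
    "decoder": nn.Linear(16, 100),
})

# Keep the pivot side fixed; only non-pivot parameters receive updates.
for name, p in model.named_parameters():
    p.requires_grad = "pivot" not in name

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print("trainable:", trainable)
```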

FreeTransfer-X: Safe and Label-Free Cross-Lingual Transfer from Off-the-Shelf Models

no code implementations • Findings (NAACL) 2022 • Yinpeng Guo, Liangyou Li, Xin Jiang, Qun Liu

However, labeled cross-lingual corpora are expensive or even inaccessible, especially in fields where labels are private, such as diagnostic results of symptoms in medicine and user profiles in business.

Cross-Lingual Transfer • Knowledge Distillation +3

End-to-End Simultaneous Speech Translation with Pretraining and Distillation: Huawei Noah’s System for AutoSimTranS 2022

no code implementations • NAACL (AutoSimTrans) 2022 • Xingshan Zeng, Pengfei Li, Liangyou Li, Qun Liu

This paper describes the system submitted to AutoSimTrans 2022 from Huawei Noah’s Ark Lab, which won the first place in the audio input track of the Chinese-English translation task.

Knowledge Distillation • NMT +1

AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation

no code implementations17 Dec 2022 Xingshan Zeng, Liangyou Li, Qun Liu

To alleviate the data scarcity problem in end-to-end speech translation (ST), pre-training on speech recognition and machine translation data is considered an important technique.

Machine Translation • Speech Recognition +2

Evaluating the Efficacy of Length-Controllable Machine Translation

no code implementations3 May 2023 Hao Cheng, Meng Zhang, Weixuan Wang, Liangyou Li, Qun Liu, Zhihua Zhang

We can use automatic summarization or machine translation evaluation metrics for length-controllable machine translation, but these are not necessarily suitable or accurate.

Machine Translation • Translation
