no code implementations • EMNLP 2021 • Haoran Xu, Hainan Zhang, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Yanyan Lan
Although exposure bias has been widely studied in some NLP tasks, it presents unique challenges in dialogue response generation, a representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses to the same context, differing not only in expression but also in topic.
no code implementations • 4 Nov 2023 • Weiting Tan, Haoran Xu, Lingfeng Shen, Shuyue Stella Li, Kenton Murray, Philipp Koehn, Benjamin Van Durme, Yunmo Chen
Large language models trained primarily in a monolingual setting have demonstrated their ability to generalize to machine translation using zero- and few-shot examples with in-context learning.
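As a rough illustration of the zero-/few-shot setup mentioned above, the sketch below assembles a few-shot translation prompt; the demonstration pairs, the language pair, and the idea of passing the string to an LLM's generation call are illustrative assumptions, not the authors' exact protocol.

    # Minimal sketch: build a few-shot in-context translation prompt.
    # The demonstration pairs below are illustrative placeholders.
    def build_mt_prompt(demos, source_sentence, src="German", tgt="English"):
        lines = [f"{src}: {s}\n{tgt}: {t}" for s, t in demos]
        lines.append(f"{src}: {source_sentence}\n{tgt}:")
        return "\n\n".join(lines)

    demos = [("Guten Morgen.", "Good morning."),
             ("Wie geht es dir?", "How are you?")]
    prompt = build_mt_prompt(demos, "Das Wetter ist heute schön.")
    print(prompt)  # this string would be fed to a decoder-only LLM for generation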
no code implementations • 2 Oct 2023 • Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton Murray
Text generation models are notoriously vulnerable to errors in the training data.
1 code implementation • 20 Sep 2023 • Haoran Xu, Young Jin Kim, Amr Sharaf, Hany Hassan Awadalla
In this study, we propose a novel fine-tuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on.
1 code implementation • NeurIPS 2023 • Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan
Offline reinforcement learning (RL) has received considerable attention in recent years due to its attractive capability of learning policies from offline datasets without environmental interactions.
no code implementations • 6 Jul 2023 • Li Jiang, Sijie Chen, JieLin Qiu, Haoran Xu, Wai Kin Chan, Zhao Ding
The prevalent use of benchmarks in current offline reinforcement learning (RL) research has led model development to neglect the imbalanced distributions of real-world datasets.
1 code implementation • 25 May 2023 • Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Ya-Qin Zhang
Offline-to-online reinforcement learning (RL), by combining the benefits of offline pretraining and online finetuning, promises enhanced sample efficiency and policy performance.
1 code implementation • 23 May 2023 • Haoran Xu, Weiting Tan, Shuyue Stella Li, Yunmo Chen, Benjamin Van Durme, Philipp Koehn, Kenton Murray
Incorporating language-specific (LS) modules is a proven method to boost performance in multilingual machine translation.
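A common way such modules are realized, shown below as a minimal and purely hypothetical sketch, is to give every language its own feed-forward block and route each batch through the block matching its language ID; the layer sizes and routing scheme are my assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class LanguageSpecificFFN(nn.Module):
        """Hypothetical sketch: one feed-forward sublayer per language."""
        def __init__(self, d_model, d_ff, num_languages):
            super().__init__()
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(num_languages)
            ])

        def forward(self, x, lang_id):
            # x: (batch, seq_len, d_model); the whole batch shares one language here
            return x + self.experts[lang_id](x)

    layer = LanguageSpecificFFN(d_model=512, d_ff=2048, num_languages=4)
    out = layer(torch.randn(2, 10, 512), lang_id=1)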
1 code implementation • 3 May 2023 • Haoran Xu, Maha Elbayad, Kenton Murray, Jean Maillard, Vedanuj Goswami
Mixture-of-experts (MoE) models that employ sparse activation have demonstrated effectiveness in significantly increasing the number of parameters while maintaining low computational requirements per token.
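For readers unfamiliar with sparse activation, the sketch below shows generic top-k expert routing, the basic mechanism that lets MoE layers grow parameter count while keeping per-token compute roughly constant; the toy experts, dimensions, and k are illustrative and do not reflect the specific models studied in the paper.

    import numpy as np

    def topk_moe(x, gate_w, expert_ws, k=2):
        """Generic sketch: each token is processed by only its top-k experts."""
        logits = x @ gate_w                        # (tokens, num_experts)
        topk = np.argsort(-logits, axis=-1)[:, :k]
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            probs = np.exp(logits[t, topk[t]])
            probs /= probs.sum()                   # renormalize over chosen experts
            for w, e in zip(probs, topk[t]):
                out[t] += w * np.tanh(x[t] @ expert_ws[e])  # toy one-layer expert
        return out

    d, n_experts = 16, 8
    x = np.random.randn(4, d)
    out = topk_moe(x, np.random.randn(d, n_experts),
                   np.random.randn(n_experts, d, d), k=2)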
2 code implementations • 28 Mar 2023 • Haoran Xu, Li Jiang, Jianxiong Li, Zhuoran Yang, Zhaoran Wang, Victor Wai Kin Chan, Xianyuan Zhan
This gives a deeper understanding of why the in-sample learning paradigm works, i.e., it applies implicit value regularization to the policy.
1 code implementation • 10 Feb 2023 • Haoran Xu, Jean Maillard, Vedanuj Goswami
In this work, we first investigate how to utilize intra-distillation to learn more *language-specific* parameters and then show the importance of these language-specific parameters.
1 code implementation • 3 Feb 2023 • Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Qing-Shan Jia, Ya-Qin Zhang
RGM is formulated as a bi-level optimization problem: the upper layer optimizes a reward correction term that performs visitation distribution matching w.r.t.
no code implementations • 28 Jan 2023 • Qin Zhang, Linrui Zhang, Haoran Xu, Li Shen, Bowen Wang, Yongzhe Chang, Xueqian Wang, Bo Yuan, DaCheng Tao
Offline safe RL is of great practical relevance for deploying agents in real-world applications.
no code implementations • ICCV 2023 • Yangru Huang, Peixi Peng, Yifan Zhao, Yunpeng Zhai, Haoran Xu, Yonghong Tian
Efficient motion and appearance modeling are critical for vision-based Reinforcement Learning (RL).
1 code implementation • 15 Oct 2022 • Haoran Xu, Li Jiang, Jianxiong Li, Xianyuan Zhan
We decompose the conventional reward-maximizing policy in offline RL into a guide-policy and an execute-policy.
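One plausible way to wire this decomposition at inference time is sketched below: the guide-policy proposes a target state and the execute-policy conditions on it. Reading the guide's output as a target state is my interpretation of the abstract, and both networks are stand-in linear maps rather than the paper's trained models.

    import numpy as np

    rng = np.random.default_rng(0)
    STATE_DIM, ACTION_DIM = 8, 2
    W_guide = rng.normal(size=(STATE_DIM, STATE_DIM))      # guide: state -> target state
    W_exec = rng.normal(size=(2 * STATE_DIM, ACTION_DIM))  # execute: (state, target) -> action

    def act(state):
        target_state = np.tanh(state @ W_guide)            # guide-policy proposes where to go
        action = np.tanh(np.concatenate([state, target_state]) @ W_exec)
        return action

    print(act(rng.normal(size=STATE_DIM)))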
1 code implementation • 20 Jul 2022 • Haoran Xu, Xianyuan Zhan, Honglei Yin, Huiling Qin
We study the problem of offline Imitation Learning (IL) where an agent aims to learn an optimal expert behavior policy without additional online environment interactions.
no code implementations • 1 Jul 2022 • Wenjia Zhang, Haoran Xu, Haoyi Niu, Peng Cheng, Ming Li, Heming Zhang, Guyue Zhou, Xianyuan Zhan
In this paper, we propose the Discriminator-guided Model-based offline Imitation Learning (DMIL) framework, which introduces a discriminator to simultaneously distinguish the dynamics correctness and suboptimality of model rollout data against real expert demonstrations.
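Such a discriminator can be pictured as a binary classifier over transitions; the sketch below shows one hypothetical form, an MLP that scores (state, action, next_state) triples with a real-vs-rollout label, which is an assumed simplification rather than the paper's exact discriminator.

    import torch
    import torch.nn as nn

    class TransitionDiscriminator(nn.Module):
        """Hypothetical sketch: score (s, a, s') triples as expert data vs. model rollout."""
        def __init__(self, state_dim, action_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, s, a, s_next):
            return self.net(torch.cat([s, a, s_next], dim=-1))  # logit: expert vs. rollout

    disc = TransitionDiscriminator(state_dim=17, action_dim=6)
    logits = disc(torch.randn(32, 17), torch.randn(32, 6), torch.randn(32, 17))
    loss = nn.BCEWithLogitsLoss()(logits, torch.ones(32, 1))    # label 1 = expert data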
1 code implementation • 23 May 2022 • Haoran Xu, Philipp Koehn, Kenton Murray
We first highlight the large sensitivity (contribution) gap between high-sensitivity and low-sensitivity parameters and show that model generalization performance can be significantly improved after balancing the contribution of all parameters.
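Parameter sensitivity in this line of work is often approximated by a first-order estimate of how much the loss would change if a parameter were zeroed out; the sketch below uses the common |gradient × parameter| proxy, which is an assumption on my part rather than the paper's exact definition.

    import torch
    import torch.nn as nn

    # Sketch: approximate per-parameter sensitivity with |grad * param|,
    # a first-order proxy for the loss change when a parameter is zeroed.
    model = nn.Linear(10, 2)
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()

    sensitivity = {name: (p.grad * p).abs() for name, p in model.named_parameters()}
    for name, s in sensitivity.items():
        print(name, "mean sensitivity:", s.mean().item())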
2 code implementations • 23 May 2022 • Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, Ya-Qin Zhang
In offline reinforcement learning (RL), one detrimental issue to policy learning is the error accumulation of deep Q function in out-of-distribution (OOD) areas.
1 code implementation • Findings (NAACL) 2022 • Haoran Xu, Kenton Murray
The current state-of-the-art for few-shot cross-lingual transfer learning first trains on abundant labeled data in the source language and then fine-tunes with a few examples in the target language, a step termed target-adapting.
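The source-training / target-adapting recipe amounts to two fine-tuning stages; the generic sketch below illustrates only the loop structure, with the model, data, learning rates, and step counts all chosen as placeholders.

    import torch

    def run_stage(model, batches, lr, steps):
        opt = torch.optim.AdamW(model.parameters(), lr=lr)
        for _, (x, y) in zip(range(steps), batches):
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()

    model = torch.nn.Linear(16, 4)  # stand-in for a pretrained multilingual model
    source_batches = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(100)]
    target_batches = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(5)]
    run_stage(model, source_batches, lr=5e-5, steps=100)  # source-language training
    run_stage(model, target_batches, lr=1e-5, steps=5)    # few-shot target-adapting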
1 code implementation • ICON 2021 • Haoran Xu, Sixing Lu, Zhongkai Sun, Chengyuan Ma, Chenlei Guo
Text Style Transfer (TST) aims to alter the underlying style of the source text to another specific style while keeping the same content.
no code implementations • 14 Oct 2021 • Haoran Xu, Xianyuan Zhan, Jianxiong Li, Honglei Yin
In this work, we start from the performance difference between the learned policy and the behavior policy and derive a new policy learning objective that can be used in the offline setting; it corresponds to the advantage function value of the behavior policy multiplied by a state-marginal density ratio.
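In symbols, the objective described above can be sketched roughly as follows, where d^π is the discounted state-marginal and A^{π_β} the advantage function of the behavior policy (the notation is mine, not taken from the paper):

    J(\pi) - J(\pi_\beta)
      = \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\pi},\, a \sim \pi(\cdot\mid s)}\big[A^{\pi_\beta}(s,a)\big]
      = \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\pi_\beta},\, a \sim \pi(\cdot\mid s)}
        \Big[\tfrac{d^{\pi}(s)}{d^{\pi_\beta}(s)}\, A^{\pi_\beta}(s,a)\Big]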
no code implementations • 29 Sep 2021 • Huiling Qin, Xianyuan Zhan, Yuanxun li, Haoran Xu, Yu Zheng
Jointly solving these two tasks allows full utilization of information from both labeled and unlabeled data, thus alleviating the problem of over-reliance on labeled data.
2 code implementations • EMNLP 2021 • Mahsa Yarmohammadi, Shijie Wu, Marc Marone, Haoran Xu, Seth Ebner, Guanghui Qin, Yunmo Chen, Jialiang Guo, Craig Harman, Kenton Murray, Aaron Steven White, Mark Dredze, Benjamin Van Durme
Zero-shot cross-lingual information extraction (IE) describes the construction of an IE model for some target language, given existing annotations exclusively in some other language, typically English.
2 code implementations • EMNLP 2021 • Haoran Xu, Benjamin Van Durme, Kenton Murray
The success of bidirectional encoders using masked language models, such as BERT, on numerous natural language processing tasks has prompted researchers to attempt to incorporate these pre-trained models into neural machine translation (NMT) systems.
Ranked #2 on Machine Translation on IWSLT2014 German-English
no code implementations • 19 Jul 2021 • Haoran Xu, Xianyuan Zhan, Xiangyu Zhu
We study the problem of safe offline reinforcement learning (RL), in which the goal is to learn a policy that maximizes long-term reward while satisfying safety constraints, given only offline data and without further interaction with the environment.
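Written out, the safe offline RL problem referred to here is usually posed as a constrained objective learned from a fixed dataset (the notation below is mine):

    \max_{\pi}\ \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t}\gamma^{t} r(s_t,a_t)\Big]
    \quad \text{s.t.} \quad
    \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t}\gamma^{t} c(s_t,a_t)\Big] \le \kappa,
    \qquad \text{with } \pi \text{ learned only from a fixed dataset } \mathcal{D}.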
1 code implementation • 19 Jul 2021 • Haoran Xu, Philipp Koehn
Typically, an orthogonal linear transformation is learned by aligning static type-level embeddings to build a shared semantic space.
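A standard way to learn such an orthogonal mapping is the Procrustes solution over a dictionary of aligned static embeddings; a minimal numpy sketch, with random data standing in for real embeddings, is:

    import numpy as np

    # Orthogonal Procrustes: find W minimizing ||X W - Y||_F with W orthogonal.
    # X: source-language vectors, Y: aligned target-language vectors (random stand-ins here).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 300))
    Y = rng.normal(size=(5000, 300))

    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt          # orthogonal mapping into the shared semantic space
    mapped = X @ W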
1 code implementation • 16 May 2021 • Xianyuan Zhan, Xiangyu Zhu, Haoran Xu
Recent offline reinforcement learning (RL) studies have made considerable progress toward making RL usable in real-world systems by learning policies from pre-collected datasets without environment interaction.
1 code implementation • EACL (AdaptNLP) 2021 • Haoran Xu, Philipp Koehn
Linear embedding transformation has been shown to be effective for zero-shot cross-lingual transfer tasks, achieving surprisingly promising results.
2 code implementations • EACL (AdaptNLP) 2021 • Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray
Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain.
no code implementations • 23 Feb 2021 • Xianyuan Zhan, Haoran Xu, Yue Zhang, Xiangyu Zhu, Honglei Yin, Yu Zheng
Optimizing the combustion efficiency of a thermal power generating unit (TPGU) is a highly challenging and critical task in the energy industry.
no code implementations • COLING 2020 • Jianfeng Liu, Ling Luo, Xiang Ao, Yan Song, Haoran Xu, Jian Ye
Multi-source neural machine translation aims to translate from parallel sources of information (e. g. languages, images, etc.)