Search Results for author: Zi-Yi Dou

Found 25 papers, 14 papers with code

Improving Pre-trained Vision-and-Language Embeddings for Phrase Grounding

no code implementations EMNLP 2021 Zi-Yi Dou, Nanyun Peng

Phrase grounding aims to map textual phrases to their associated image regions; it can be a prerequisite for multimodal reasoning and can benefit tasks that require identifying objects based on language.

Phrase Grounding

FOAM: A Follower-aware Speaker Model For Vision-and-Language Navigation

1 code implementation · 9 Jun 2022 · Zi-Yi Dou, Nanyun Peng

Speaker-follower models have proven effective in vision-and-language navigation, where a speaker model synthesizes new instructions to augment the training data for a follower navigation model.

Vision and Language Navigation
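The speaker-follower augmentation loop described above can be sketched in a few lines. This is a toy illustration of the general scheme, not the FOAM implementation: the `speaker` stand-in here is a trivial template function, whereas the paper uses a trained speaker model.

```python
# Toy sketch of speaker-based data augmentation (illustrative only):
# a speaker produces instructions for unlabeled paths, and the
# synthesized pairs extend the follower's training data.

def speaker(path):
    # Stand-in for a trained speaker model: emit a templated
    # instruction describing the sequence of actions in the path.
    return "go " + " then ".join(path)

def augment(labeled_pairs, unlabeled_paths):
    """Return labeled (instruction, path) data extended with
    speaker-synthesized pairs for the unlabeled paths."""
    synthetic = [(speaker(p), p) for p in unlabeled_paths]
    return labeled_pairs + synthetic

labeled = [("go left then right", ["left", "right"])]
augmented = augment(labeled, [["forward", "left"]])
# augmented now holds the original pair plus one synthetic pair
```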

Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization

1 code implementation · 1 Jan 2022 · Zi-Yi Dou, Nanyun Peng

In this paper, we instead focus on better utilizing the implicit knowledge stored in pre-trained language models.

Question Answering · Relation Extraction +2
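Cloze translation, the technique named in this paper's title, rewrites a natural question into a fill-in-the-blank statement so a masked language model can supply the answer. The rule below is a minimal illustrative converter, not the paper's actual translation methods.

```python
import re

# Illustrative sketch: turn a simple "What is X?" question into a
# cloze statement with a mask token. A single regex rule like this
# is a toy example; the paper explores several translation methods.

def to_cloze(question, mask="[MASK]"):
    m = re.match(r"What is (.+)\?$", question.strip())
    if m:
        return f"{m.group(1)} is {mask}."
    # Fallback for unmatched patterns: append the mask token.
    return question.strip().rstrip("?") + f" {mask}."

to_cloze("What is the capital of France?")
# → "the capital of France is [MASK]."
```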

RefSum: Refactoring Neural Summarization

1 code implementation NAACL 2021 Yixin Liu, Zi-Yi Dou, PengFei Liu

Although some recent works show potential complementarity among different state-of-the-art systems, few have investigated this problem in text summarization.

Text Summarization

ExplainaBoard: An Explainable Leaderboard for NLP

1 code implementation ACL 2021 PengFei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig

In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which in addition to inheriting the functionality of the standard leaderboard, also allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?)

Machine Translation

Word Alignment by Fine-tuning Embeddings on Parallel Corpora

2 code implementations EACL 2021 Zi-Yi Dou, Graham Neubig

In addition, we demonstrate that we are able to train multilingual word aligners that can obtain robust performance on different language pairs.

Cross-Lingual Transfer · Translation +2
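A common way to extract word alignments from embeddings, and the general idea behind this line of work, is to align tokens whose contextual vectors are mutual nearest neighbors across the two sentences. The sketch below illustrates that extraction step only; it is an assumed simplification, not the paper's training or alignment code.

```python
import math

# Illustrative sketch: extract word alignments as mutual nearest
# neighbors under cosine similarity between token embeddings.
# Inputs are plain lists of vectors (one per token); a real system
# would use contextual embeddings from a fine-tuned model.

def align(src_emb, tgt_emb):
    """src_emb, tgt_emb: lists of equal-length vectors.
    Returns a list of (source_index, target_index) pairs."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    sim = [[cos(u, v) for v in tgt_emb] for u in src_emb]
    # Best target for each source token, and vice versa.
    fwd = [max(range(len(tgt_emb)), key=lambda j: row[j]) for row in sim]
    bwd = [max(range(len(src_emb)), key=lambda i: sim[i][j])
           for j in range(len(tgt_emb))]
    # Keep a pair only when both directions agree (intersection).
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

Intersecting the two alignment directions trades recall for precision, a standard choice in word alignment.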

GSum: A General Framework for Guided Neural Abstractive Summarization

1 code implementation NAACL 2021 Zi-Yi Dou, PengFei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig

Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.

Abstractive Text Summarization

CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems

2 code implementations Findings of the Association for Computational Linguistics 2020 Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang

In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.

Text Summarization

A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards

1 code implementation WS 2020 Zi-Yi Dou, Sachin Kumar, Yulia Tsvetkov

The model uses reinforcement learning to directly optimize a bilingual semantic similarity metric between the summaries generated in a target language and gold summaries in a source language.

Machine Translation · reinforcement-learning +4

Dynamic Data Selection and Weighting for Iterative Back-Translation

1 code implementation EMNLP 2020 Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig

Back-translation has proven to be an effective method to utilize monolingual data in neural machine translation (NMT), and iteratively conducting back-translation can further improve the model performance.

Domain Adaptation · Machine Translation +1
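The iterative back-translation loop referenced above can be sketched schematically. Everything here is a toy stand-in (a dictionary-lookup "model" instead of a trained NMT system), and the sketch omits the dynamic data selection and weighting that this paper actually contributes; it only shows the basic round structure.

```python
# Toy sketch of iterative back-translation (illustrative only).
# Each round: back-translate target-side monolingual data into
# synthetic source sentences, then retrain the forward model on
# real + synthetic parallel pairs.

def train(pairs):
    # Toy "training": memorize the parallel pairs as a dictionary.
    return dict(pairs)

def translate(model, sentence):
    # Identity fallback for unseen inputs.
    return model.get(sentence, sentence)

def iterative_back_translation(parallel, mono_tgt, rounds=2):
    fwd = train(parallel)                           # source -> target
    for _ in range(rounds):
        bwd = train([(t, s) for s, t in parallel])  # target -> source
        synthetic = [(translate(bwd, t), t) for t in mono_tgt]
        fwd = train(parallel + synthetic)           # retrain forward model
    return fwd
```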

Information Aggregation for Multi-Head Attention with Routing-by-Agreement

no code implementations NAACL 2019 Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, Zhaopeng Tu

Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces.

Machine Translation · Translation
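For reference, the standard multi-head attention mechanism this paper builds on can be written compactly in NumPy. The sketch below implements the usual formulation with per-head scaled dot-product attention and simple concatenation of head outputs; the paper's routing-by-agreement aggregation, which replaces that concatenation, is not implemented here.

```python
import numpy as np

# Minimal multi-head self-attention sketch (standard formulation,
# not the paper's aggregation method). Each head attends within its
# own representation subspace; outputs are concatenated at the end.

def multi_head_attention(x, w_q, w_k, w_v, n_heads):
    """x: (seq, d_model); w_*: (d_model, d_model) projection matrices.
    Returns an array of shape (seq, d_model)."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_head)
        # Numerically stable softmax over the key dimension.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        heads.append(weights @ v[:, sl])
    return np.concatenate(heads, axis=-1)
```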

compare-mt: A Tool for Holistic Comparison of Language Generation Systems

2 code implementations NAACL 2019 Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting

In this paper, we describe compare-mt, a tool for holistic analysis and comparison of the results of systems for language generation tasks such as machine translation.

Machine Translation · Text Generation +1

Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement

no code implementations · 15 Feb 2019 · Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Shuming Shi, Tong Zhang

With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation.

Machine Translation · Translation

Exploiting Deep Representations for Neural Machine Translation

no code implementations EMNLP 2018 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, Tong Zhang

Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures.

Machine Translation · Translation

Unsupervised Bilingual Lexicon Induction via Latent Variable Models

no code implementations EMNLP 2018 Zi-Yi Dou, Zhi-Hao Zhou, Shu-Jian Huang

Bilingual lexicon extraction has been studied for decades and most previous methods have relied on parallel corpora or bilingual dictionaries.

Bilingual Lexicon Induction · Word Embeddings

SkipNet: Learning Dynamic Routing in Convolutional Networks

2 code implementations ECCV 2018 Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, Joseph E. Gonzalez

While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient.

Decision Making · reinforcement-learning
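The dynamic-routing idea described above, where a gate decides per input whether to execute a layer or take the identity shortcut, can be sketched in a few lines. This is a toy illustration of the control flow only, not the SkipNet implementation (which learns gates jointly with the network, partly via reinforcement learning).

```python
# Toy sketch of input-dependent layer skipping (illustrative only):
# a gate inspects the current activation and decides whether to run
# the layer or skip it via the identity shortcut, so easy inputs
# pass through fewer layers.

def run_with_gates(x, layers, gates):
    """Apply each layer only when its gate fires.
    Returns (output, number_of_layers_executed)."""
    used = 0
    for layer, gate in zip(layers, gates):
        if gate(x):
            x = layer(x)
            used += 1
        # else: identity shortcut, x passes through unchanged
    return x, used

layers = [lambda v: v * 2, lambda v: v + 1]
gates = [lambda v: v > 0, lambda v: False]
run_with_gates(3, layers, gates)
# → (6, 1): the first layer ran, the second was skipped
```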

Metric Learning-based Generative Adversarial Network

no code implementations · 8 Nov 2017 · Zi-Yi Dou

Generative Adversarial Networks (GANs), as a framework for estimating generative models via an adversarial process, have attracted considerable attention and have proven powerful in a variety of tasks.

Metric Learning

Dynamic Oracle for Neural Machine Translation in Decoding Phase

no code implementations LREC 2018 Zi-Yi Dou, Hao Zhou, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen

Scheduled Sampling, however, has certain limitations, and we propose two dynamic oracle-based methods to improve it.

Machine Translation · Translation
