1 code implementation • 16 Feb 2023 • Meng Cao, Yue Dong, Jingyi He, Jackie Chi Kit Cheung
State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, mainly due to noise in the training dataset.
no code implementations • 19 Dec 2022 • Yu Fu, Deyi Xiong, Yue Dong
Thus, we introduce inverse reinforcement learning into text summarization and define a suite of sub-rewards that are important for summarization optimization.
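To make the reward-combination idea concrete, here is a minimal sketch of a suite of sub-rewards folded into one scalar training signal. The specific sub-rewards (coverage, novelty, length) and the fixed weights are illustrative assumptions; the paper estimates the reward via inverse reinforcement learning rather than hand-setting it.

```python
# Illustrative sketch: combining sub-rewards for summarization into a single
# scalar reward. The sub-rewards and weights below are placeholders; IRL would
# estimate the weighting from human-written reference summaries instead.

def coverage(summary_tokens, source_tokens):
    """Fraction of summary tokens that also appear in the source."""
    source_vocab = set(source_tokens)
    if not summary_tokens:
        return 0.0
    return sum(t in source_vocab for t in summary_tokens) / len(summary_tokens)

def novelty(summary_tokens, source_tokens):
    """Fraction of summary tokens NOT copied from the source (abstractiveness)."""
    return 1.0 - coverage(summary_tokens, source_tokens)

def length_penalty(summary_tokens, target_len=50):
    """Penalize summaries far from a target length."""
    return -abs(len(summary_tokens) - target_len) / target_len

def total_reward(summary_tokens, source_tokens, weights=(0.6, 0.2, 0.2)):
    """Weighted sum of sub-rewards; the weights here are arbitrary."""
    subs = (coverage(summary_tokens, source_tokens),
            novelty(summary_tokens, source_tokens),
            length_penalty(summary_tokens))
    return sum(w * r for w, r in zip(weights, subs))

if __name__ == "__main__":
    src = "the cat sat on the mat while the dog slept".split()
    summ = "the cat sat on the mat".split()
    print(round(total_reward(summ, src), 3))
```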
no code implementations • NAACL (ACL) 2022 • Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, Aliaksei Severyn
Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, simplification, and style transfer.
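As a concrete illustration of the text-editing paradigm, the sketch below realizes an output sentence from per-token edit tags, the decoding step shared by tagging-based editors. The tag format (KEEP/DELETE with an optional inserted phrase) is one common scheme, not a claim about any specific model covered by the tutorial.

```python
# Illustrative sketch: applying per-token edit tags to a source sentence.
# Each tag is KEEP or DELETE, optionally suffixed with "|phrase", meaning:
# insert the phrase before the (kept or deleted) token.

def apply_edits(tokens, tags):
    out = []
    for token, tag in zip(tokens, tags):
        base, _, added = tag.partition("|")
        if added:
            out.append(added)
        if base == "KEEP":
            out.append(token)
        elif base != "DELETE":
            raise ValueError(f"unknown tag: {tag}")
    return " ".join(out)

# Grammatical error correction example: "he go to school" -> "he goes to school"
print(apply_edits(["he", "go", "to", "school"],
                  ["KEEP", "DELETE|goes", "KEEP", "KEEP"]))
```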
no code implementations • 28 Apr 2022 • Yue Dong, John Wieting, Pat Verga
In this work, we show that these entities are not aberrations, but instead require utilizing external world knowledge to infer reasoning paths from entities in the source.
1 code implementation • ACL 2022 • Meng Cao, Yue Dong, Jackie Chi Kit Cheung
State-of-the-art abstractive summarization systems often generate hallucinations, i.e., content that is not directly inferable from the source text.
1 code implementation • ACL 2021 • Rui Meng, Khushboo Thaker, Lei Zhang, Yue Dong, Xingdi Yuan, Tong Wang, Daqing He
Faceted summarization provides briefings of a document from different perspectives.
Ranked #1 on Unsupervised Extractive Summarization on FacetSum
1 code implementation • EACL 2021 • Yue Dong, Andrei Mircea, Jackie Chi Kit Cheung
We propose an unsupervised graph-based ranking model for extractive summarization of long scientific documents.
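A minimal, generic version of graph-based sentence ranking is sketched below: sentences are nodes, similarity defines edges, and centrality yields the extract. Note that the paper's model additionally exploits the hierarchical structure and edge directionality of long scientific documents, which this toy version omits.

```python
# Minimal sketch of unsupervised graph-based sentence ranking: score each
# sentence by its summed similarity to all other sentences (degree centrality)
# and extract the top-k. Bag-of-words cosine similarity is used for simplicity.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_sentences(sentences, k=2):
    bags = [Counter(s.lower().split()) for s in sentences]
    scores = [sum(cosine(bags[i], bags[j]) for j in range(len(bags)) if j != i)
              for i in range(len(bags))]
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]  # keep original document order

doc = ["Graph methods rank sentences by centrality.",
       "Centrality reflects how similar a sentence is to the rest.",
       "Unrelated filler sentence about the weather."]
print(rank_sentences(doc, k=2))
```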
1 code implementation • ICLR 2021 • Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, Eric I Chang, Yan Xu
To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations.
Ranked #2 on Image Inpainting on CelebA-HQ
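The following toy module sketches the co-modulation idea: modulation parameters are computed jointly from conditional-input features and a stochastic latent code, so the generator stays image-conditioned without losing stochasticity. All layer shapes here are arbitrary placeholders, not the paper's architecture.

```python
# Toy sketch of co-modulation: the style vector that modulates the decoder is
# produced jointly from (i) features of the conditional input (e.g., a masked
# image) and (ii) a random latent code, enabling diverse yet conditioned output.
import torch
import torch.nn as nn

class CoModulation(nn.Module):
    def __init__(self, cond_dim=128, z_dim=64, style_dim=128):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(z_dim, style_dim), nn.ReLU())
        # Joint affine map from [conditional features; mapped latent] to style.
        self.joint = nn.Linear(cond_dim + style_dim, style_dim)

    def forward(self, cond_feat, z):
        w = self.mapping(z)                       # stochastic style branch
        style = self.joint(torch.cat([cond_feat, w], dim=-1))
        return style                              # would modulate decoder convs

cond = torch.randn(4, 128)   # encoder features of the conditional input
z = torch.randn(4, 64)       # random latent code
print(CoModulation()(cond, z).shape)  # torch.Size([4, 128])
```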
no code implementations • Findings (ACL) 2021 • Yue Dong, Chandra Bhagavatula, Ximing Lu, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi
Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense.
1 code implementation • EMNLP 2020 • Yao Lu, Yue Dong, Laurent Charlin
Multi-document summarization is a challenging task for which few large-scale datasets exist.
1 code implementation • 25 Oct 2020 • Wenxin Hou, Yue Dong, Bairong Zhuang, Longfei Yang, Jiatong Shi, Takahiro Shinozaki
In this paper, we report a large-scale end-to-end language-independent multilingual model for joint automatic speech recognition (ASR) and language identification (LID).
Automatic Speech Recognition (ASR)
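One common way to obtain joint ASR and LID from a single end-to-end model is to emit a language tag alongside the transcript; whether this paper uses exactly that scheme is not stated in the snippet above, so the tag convention in this parsing sketch is an assumption.

```python
# Generic illustration: a single multilingual end-to-end model can emit a
# language tag followed by the transcript, yielding joint ASR + LID from one
# decoding pass. The "[xx]" tag convention is assumed for illustration.
def parse_joint_hypothesis(hyp: str):
    """Split '[en] hello world' into (language_id, transcript)."""
    if hyp.startswith("[") and "]" in hyp:
        tag, rest = hyp.split("]", 1)
        return tag[1:], rest.strip()
    return "unk", hyp  # no tag emitted; fall back to unknown language

print(parse_joint_hypothesis("[en] hello world"))   # ('en', 'hello world')
print(parse_joint_hypothesis("[zh] 你好 世界"))       # ('zh', '你好 世界')
```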
1 code implementation • EMNLP 2020 • Meng Cao, Yue Dong, Jiapeng Wu, Jackie Chi Kit Cheung
Experimental results show that our model is able to correct factual errors in summaries generated by other neural summarization models and outperforms previous models on factual consistency evaluation on the CNN/DailyMail dataset.
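A corrector of this kind is typically trained on synthetic corruptions of clean summaries. The sketch below shows one plausible corruption, swapping an entity in the summary for a different entity from the source, with the caveat that the paper's exact corruption scheme may differ.

```python
# Sketch of one way to create training data for a post-hoc factual corrector:
# corrupt a reference summary by swapping an entity with a different entity
# from the source, then train seq2seq to map corrupted -> original.
import random

def corrupt_by_entity_swap(summary_tokens, source_entities, rng=random):
    """Replace one entity occurring in the summary with another source entity."""
    present = [e for e in source_entities if e in summary_tokens]
    if len(present) == 0 or len(source_entities) < 2:
        return summary_tokens  # nothing to swap
    target = rng.choice(present)
    replacement = rng.choice([e for e in source_entities if e != target])
    return [replacement if t == target else t for t in summary_tokens]

ents = ["Alice", "Bob", "Paris"]
summ = "Alice flew to Paris".split()
print(" ".join(corrupt_by_entity_swap(summ, ents, random.Random(0))))
```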
no code implementations • EMNLP 2020 • Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, Jingjing Liu
Pre-trained neural abstractive summarization systems have dominated extractive strategies on news summarization performance, at least in terms of ROUGE.
no code implementations • ECCV 2020 • Xin Wei, Guojun Chen, Yue Dong, Stephen Lin, Xin Tong
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene, leading to improved realism.
1 code implementation • 1 May 2020 • Yue Dong, Andrei Mircea, Jackie C. K. Cheung
We propose an unsupervised graph-based ranking model for extractive summarization of long scientific documents.
Ranked #1 on Unsupervised Extractive Summarization on PubMed
3 code implementations • CVPR 2020 • Shengyu Zhao, Yilun Sheng, Yue Dong, Eric I-Chao Chang, Yan Xu
In this paper, we propose an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision.
Ranked #1 on Optical Flow Estimation on KITTI 2012
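A toy version of the occlusion-aware matching step is sketched below: a soft mask predicted from the warped features down-weights occluded regions before cost computation. The channel count is arbitrary, and the real module includes additional learned terms beyond this simplification.

```python
# Toy sketch of occlusion-aware feature matching: after warping the second
# image's features with the current flow, multiply them by a learned soft
# occlusion mask so occluded regions do not contaminate the matching cost.
import torch
import torch.nn as nn

class OcclusionMaskedWarp(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Predict a per-pixel soft mask in [0, 1] from the warped features.
        self.mask_head = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1),
                                       nn.Sigmoid())

    def forward(self, warped_feat):
        mask = self.mask_head(warped_feat)   # learned; ~0 where occluded
        return warped_feat * mask, mask      # masked features for matching

feat = torch.randn(1, 32, 16, 16)            # features already warped by flow
masked, mask = OcclusionMaskedWarp()(feat)
print(masked.shape, mask.shape)
```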
no code implementations • IJCNLP 2019 • Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, Annie Louis
Sentence position is a strong feature for news summarization, since the lead often (but not always) summarizes the key points of the article.
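That position prior is exactly what the standard Lead-k baseline exploits; on news data, Lead-3 is the usual instantiation:

```python
# The standard Lead-k baseline that uses the position feature directly:
# take the first k sentences of the article as the summary (Lead-3 for news).
def lead_k(sentences, k=3):
    return " ".join(sentences[:k])

article = ["First sentence.", "Second sentence.", "Third.", "Fourth."]
print(lead_k(article))  # "First sentence. Second sentence. Third."
```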
5 code implementations • ICCV 2019 • Shengyu Zhao, Yue Dong, Eric I-Chao Chang, Yan Xu
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration.
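The core recursion is easy to state: each cascade predicts a residual deformation for the currently-warped moving image, and the warps compose. The sketch below uses a toy nearest-neighbour warp and a dummy subnetwork purely for illustration; `base_net` stands in for the paper's learned registration subnetwork.

```python
# Minimal sketch of recursive cascading for deformable registration.
import numpy as np

def warp(image, flow):
    """Nearest-neighbour warp of a 2D image by a dense flow field (toy)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[1]).round().astype(int), 0, w - 1)
    return image[src_y, src_x]

def recursive_cascade(moving, fixed, base_net, n_cascades=3):
    """Apply base_net repeatedly, re-warping the moving image each step."""
    warped = moving
    for _ in range(n_cascades):
        flow = base_net(warped, fixed)   # predicts a residual deformation
        warped = warp(warped, flow)      # compose with previous warps
    return warped

# Dummy subnetwork: shifts the moving image one pixel per cascade.
dummy_net = lambda m, f: np.ones((2, *m.shape))
moving = np.arange(16.0).reshape(4, 4)
print(recursive_cascade(moving, moving, dummy_net, n_cascades=2))
```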
1 code implementation • ACL 2019 • Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie Chi Kit Cheung
We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach.
Ranked #2 on Text Simplification on PWKP / WikiSmall (SARI metric)
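The interpreter half of the programmer-interpreter setup can be sketched as a small executor of edit programs over the source sentence; the exact operation format below is an assumption for illustration.

```python
# Sketch of an interpreter for edit programs: KEEP/DELETE consume the next
# source token, ADD(w) emits a new word. The programmer (a neural model)
# would predict this program; here we execute a hand-written one.
def interpret(source_tokens, program):
    out, i = [], 0
    for op in program:
        if op == "KEEP":
            out.append(source_tokens[i]); i += 1
        elif op == "DELETE":
            i += 1
        elif op.startswith("ADD("):
            out.append(op[4:-1])          # emit the added word
        else:
            raise ValueError(f"unknown op: {op}")
    out.extend(source_tokens[i:])          # copy any remaining tokens
    return " ".join(out)

# Simplify "the legislation was ratified" -> "the law was approved"
print(interpret("the legislation was ratified".split(),
                ["KEEP", "DELETE", "ADD(law)", "KEEP", "DELETE", "ADD(approved)"]))
```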
no code implementations • CVPR 2019 • Xiao Li, Yue Dong, Pieter Peers, Xin Tong
Key to our method is a novel multi-projection generative adversarial network (MP-GAN) that trains a 3D shape generator to be consistent with multiple 2D projections of the 3D shapes, and without direct access to these 3D shapes.
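The multi-projection idea can be sketched with axis-aligned max projections of an occupancy grid, standing in for rendered silhouettes from multiple viewpoints; the real model renders view-dependent projections rather than the axis-aligned simplification used here.

```python
# Toy sketch of the multi-projection idea: reduce a 3D occupancy grid to 2D
# silhouettes along several view axes, so that 2D discriminators can judge
# each projection of a generated shape.
import numpy as np

def project(voxels: np.ndarray, axis: int) -> np.ndarray:
    """Max-projection of a (D, H, W) occupancy grid along one axis."""
    return voxels.max(axis=axis)

voxels = np.zeros((8, 8, 8))
voxels[2:6, 2:6, 2:6] = 1.0            # a cube "shape" from the generator
projections = [project(voxels, ax) for ax in range(3)]
for ax, p in enumerate(projections):
    print(f"axis {ax}: silhouette area = {int(p.sum())}")  # 16 each
```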
no code implementations • 26 Nov 2018 • Pengfei Liu, Jie Fu, Yue Dong, Xipeng Qiu, Jackie Chi Kit Cheung
We present two architectures for multi-task learning with neural sequence models.
1 code implementation • EMNLP 2018 • Koustuv Sinha, Yue Dong, Jackie Chi Kit Cheung, Derek Ruths
Deep neural networks have displayed superior performance over traditional supervised classifiers in text classification.
1 code implementation • EMNLP 2018 • Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, Jackie Chi Kit Cheung
In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically-generated extractive labels.
Ranked #10 on Extractive Text Summarization on CNN / Daily Mail
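Label-free training of this kind is often realized with policy gradients over sampled extracts scored against the abstractive reference. The sketch below uses a crude unigram-overlap reward in place of ROUGE and is a generic recipe, not the paper's exact algorithm.

```python
# Sketch of label-free extractive training: sample k sentences from the
# model's scores, reward the resulting summary against the reference, and
# reinforce the sampled choices (REINFORCE).
import torch

def unigram_reward(summary: str, reference: str) -> float:
    s, r = set(summary.lower().split()), set(reference.lower().split())
    return len(s & r) / max(len(r), 1)

def reinforce_step(scores, sentences, reference, k=2):
    """scores: per-sentence logits from the model (requires grad)."""
    probs = torch.softmax(scores, dim=0)
    picks = torch.multinomial(probs, k, replacement=False)
    summary = " ".join(sentences[i] for i in picks.tolist())
    reward = unigram_reward(summary, reference)
    log_prob = torch.log(probs[picks]).sum()
    loss = -reward * log_prob          # maximize expected reward
    return loss, reward

sents = ["The senate passed the bill.", "Rain is expected.", "The bill cuts taxes."]
scores = torch.zeros(3, requires_grad=True)
loss, reward = reinforce_step(scores, sents, "Senate passes tax-cutting bill")
loss.backward()
print(round(reward, 2), scores.grad is not None)
```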
no code implementations • 19 Mar 2018 • Yue Dong
Automatic text summarization, the automated process of shortening a text while preserving the main ideas of the document(s), is a critical research area in natural language processing.
1 code implementation • CVPR 2017 • Jian Shi, Yue Dong, Hao Su, Stella X. Yu
Rendered with realistic environment maps, millions of synthetic images of objects and their corresponding albedo, shading, and specular ground-truth images are used to train an encoder-decoder CNN.