Search Results for author: Yue Dong

Found 25 papers, 15 papers with code

Learning with Rejection for Abstractive Text Summarization

1 code implementation 16 Feb 2023 Meng Cao, Yue Dong, Jingyi He, Jackie Chi Kit Cheung

State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, mainly due to noise in the training dataset.

Abstractive Text Summarization

Inverse Reinforcement Learning for Text Summarization

no code implementations 19 Dec 2022 Yu Fu, Deyi Xiong, Yue Dong

Thus, we introduce inverse reinforcement learning into text summarization and define a suite of sub-rewards that are important for summarization optimization.
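
To make the sub-reward idea concrete, here is a minimal sketch of a feature-style reward assembled as a weighted sum of sub-rewards, assuming toy salience and brevity terms with hand-set weights; it illustrates the reward shape that feature-based inverse RL learns weights for, not the paper's actual reward suite.

```python
# Minimal sketch (not the authors' code): a summarization reward assembled
# from sub-rewards. In feature-based IRL, the weights would be learned so
# that reference summaries score higher than sampled ones.
import numpy as np

def overlap_ratio(summary_tokens, doc_tokens):
    # Toy "salience" sub-reward: fraction of summary tokens found in the source.
    if not summary_tokens:
        return 0.0
    doc_vocab = set(doc_tokens)
    return sum(t in doc_vocab for t in summary_tokens) / len(summary_tokens)

def brevity(summary_tokens, target_len=30):
    # Toy "compression" sub-reward: penalize lengths far from a target.
    return float(np.exp(-abs(len(summary_tokens) - target_len) / target_len))

def reward(summary_tokens, doc_tokens, weights):
    features = np.array([
        overlap_ratio(summary_tokens, doc_tokens),
        brevity(summary_tokens),
    ])
    return float(weights @ features)

doc = "the cat sat on the mat while the dog slept".split()
summ = "the cat sat on the mat".split()
print(reward(summ, doc, weights=np.array([0.7, 0.3])))
```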

Reinforcement Learning (RL) +1

Text Generation with Text-Editing Models

no code implementations NAACL (ACL) 2022 Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, Aliaksei Severyn

Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, simplification, and style transfer.

Grammatical Error Correction Style Transfer +1

Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization

no code implementations28 Apr 2022 Yue Dong, John Wieting, Pat Verga

In this work, we show that these entities are not aberrations, but instead require utilizing external world knowledge to infer reasoning paths from entities in the source.

Abstractive Text Summarization

Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization

1 code implementation ACL 2022 Meng Cao, Yue Dong, Jackie Chi Kit Cheung

State-of-the-art abstractive summarization systems often generate hallucinations, i.e., content that is not directly inferable from the source text.

Abstractive Text Summarization Reinforcement Learning (RL)

Discourse-Aware Unsupervised Summarization for Long Scientific Documents

1 code implementation EACL 2021 Yue Dong, Andrei Mircea, Jackie Chi Kit Cheung

We propose an unsupervised graph-based ranking model for extractive summarization of long scientific documents.
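
A minimal sketch of the underlying graph-ranking step, assuming generic sentence embeddings and plain degree centrality; the paper's discourse-aware components (e.g., section structure) are deliberately omitted here.

```python
# Minimal sketch: rank sentences by centrality in a cosine-similarity graph
# and keep the top-k as the extractive summary. Embeddings are stand-ins.
import numpy as np

def rank_sentences(embeddings, k=3):
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-8, None)
    sim = unit @ unit.T              # sentence-similarity adjacency matrix
    np.fill_diagonal(sim, 0.0)       # ignore self-similarity
    scores = sim.sum(axis=1)         # degree centrality
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
sentence_embeddings = rng.normal(size=(10, 64))  # stand-in for real encodings
print(rank_sentences(sentence_embeddings, k=3))
```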

Extractive Summarization

Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

1 code implementation ICLR 2021 Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, Eric I Chang, Yan Xu

To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations.
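
A rough sketch of the co-modulation idea: one style vector is produced jointly from the image-conditional encoding and a mapped stochastic latent, then used to modulate convolution weights StyleGAN-style. The shapes and the joint affine layer below are illustrative assumptions, not the paper's architecture.

```python
# Minimal numpy sketch of co-modulation (shapes are illustrative).
import numpy as np

rng = np.random.default_rng(0)
cond_embedding = rng.normal(size=(512,))  # from the image-conditional encoder
z = rng.normal(size=(512,))               # stochastic latent
mapped = np.tanh(z)                       # stand-in for the mapping network

# Joint affine over both representations -> a single style vector.
W = rng.normal(size=(256, 1024)) * 0.02
style = W @ np.concatenate([cond_embedding, mapped])

# Weight modulation: scale each input channel of a conv kernel by the style.
conv_weight = rng.normal(size=(64, 256, 3, 3)) * 0.02  # (out, in, kh, kw)
modulated = conv_weight * style[None, :, None, None]
print(modulated.shape)
```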

Image Inpainting Image-to-Image Translation +1

On-the-Fly Attention Modulation for Neural Generation

no code implementations Findings (ACL) 2021 Yue Dong, Chandra Bhagavatula, Ximing Lu, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi

Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense.

Language Modelling Text Generation

Large-Scale End-to-End Multilingual Speech Recognition and Language Identification with Multi-Task Learning

1 code implementation 25 Oct 2020 Wenxin Hou, Yue Dong, Bairong Zhuang, Longfei Yang, Jiatong Shi, Takahiro Shinozaki

In this paper, we report a large-scale end-to-end language-independent multilingual model for joint automatic speech recognition (ASR) and language identification (LID).

Automatic Speech Recognition (ASR) +3

Factual Error Correction for Abstractive Summarization Models

1 code implementation EMNLP 2020 Meng Cao, Yue Dong, Jiapeng Wu, Jackie Chi Kit Cheung

Experimental results show that our model is able to correct factual errors in summaries generated by other neural summarization models and outperforms previous models on factual consistency evaluation on the CNN/DailyMail dataset.

Abstractive Text Summarization

Multi-Fact Correction in Abstractive Text Summarization

no code implementations EMNLP 2020 Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, Jingjing Liu

Pre-trained neural abstractive summarization systems have dominated extractive strategies on news summarization performance, at least in terms of ROUGE.

Abstractive Text Summarization News Summarization +1

Object-based Illumination Estimation with Rendering-aware Neural Networks

no code implementations ECCV 2020 Xin Wei, Guojun Chen, Yue Dong, Stephen Lin, Xin Tong

With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene, leading to improved realism.

Inverse Rendering

MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask

3 code implementations CVPR 2020 Shengyu Zhao, Yilun Sheng, Yue Dong, Eric I-Chao Chang, Yan Xu

In this paper, we propose an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision.
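
A minimal numpy sketch of the mechanism, with a random placeholder standing in for the small learned mask head:

```python
# Minimal sketch: after feature warping, a soft occlusion mask in [0, 1]
# downweights occluded positions before feature matching.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
warped_feat = rng.normal(size=(32, 48, 64))  # (C, H, W) features after warping
mask_logits = rng.normal(size=(1, 48, 64))   # would come from a small conv head
occlusion_mask = sigmoid(mask_logits)

filtered = warped_feat * occlusion_mask      # occluded areas contribute little
print(filtered.shape)
```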

Optical Flow Estimation

Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses

no code implementations IJCNLP 2019 Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, Annie Louis

Sentence position is a strong feature for news summarization, since the lead often (but not always) summarizes the key points of the article.
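
For reference, the lead baseline behind this observation is simply the first k sentences of the article; a one-liner sketch:

```python
# The lead-k baseline: sentence position alone, no learning involved.
def lead_k(sentences, k=3):
    return sentences[:k]

article = ["Lead sentence.", "Second sentence.", "Third.", "Body detail."]
print(lead_k(article))
```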

News Summarization

Recursive Cascaded Networks for Unsupervised Medical Image Registration

5 code implementations ICCV 2019 Shengyu Zhao, Yue Dong, Eric I-Chao Chang, Yan Xu

We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration.
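
A toy sketch of the recursion, with a stub returning a fixed small displacement in place of each learned cascade, and simple bilinear backward warping:

```python
# Minimal sketch: the final deformation is the composition of many small
# warps, one per cascade. The "network" is a stub for illustration only.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, flow):
    # Backward warping: sample the image at grid + flow (bilinear).
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([ys + flow[0], xs + flow[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def cascade_stub(moving, fixed):
    # Placeholder for a learned subnetwork; always shifts by 0.25 px.
    return np.full((2,) + moving.shape, 0.25)

fixed = np.zeros((8, 8)); fixed[2:6, 2:6] = 1.0
moving = np.zeros((8, 8)); moving[3:7, 3:7] = 1.0

for _ in range(4):                    # recursively apply four cascades
    moving = warp(moving, cascade_stub(moving, fixed))
print(np.abs(moving - fixed).mean())  # alignment error shrinks with cascades
```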

Image Registration Medical Image Registration

EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing

1 code implementation ACL 2019 Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie Chi Kit Cheung

We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach.
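
A minimal sketch of executing such an edit program over a source sentence; the program below is hand-written for illustration, not a model prediction.

```python
# Minimal sketch: interpret an explicit edit program (KEEP / DELETE / ADD_*)
# against source tokens, the interface a programmer-interpreter model learns
# to produce.
def apply_edits(source_tokens, program):
    out, i = [], 0
    for op in program:
        if op == "KEEP":             # copy the current source token
            out.append(source_tokens[i]); i += 1
        elif op == "DELETE":         # drop the current source token
            i += 1
        elif op.startswith("ADD_"):  # insert a new token at the cursor
            out.append(op[len("ADD_"):])
        else:
            raise ValueError(f"unknown edit op: {op}")
    out.extend(source_tokens[i:])    # remaining tokens are kept
    return out

src = "the quick brown fox jumps over the lazy dog".split()
prog = ["KEEP", "DELETE", "DELETE", "KEEP", "ADD_quickly", "KEEP"]
print(" ".join(apply_edits(src, prog)))
# -> the fox quickly jumps over the lazy dog
```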

Machine Translation Text Simplification +1

Synthesizing 3D Shapes from Silhouette Image Collections using Multi-projection Generative Adversarial Networks

no code implementations CVPR 2019 Xiao Li, Yue Dong, Pieter Peers, Xin Tong

Key to our method is a novel multi-projection generative adversarial network (MP-GAN) that trains a 3D shape generator to be consistent with multiple 2D projections of the 3D shapes, and without direct access to these 3D shapes.

Weakly-supervised Learning

Multi-task Learning over Graph Structures

no code implementations 26 Nov 2018 Pengfei Liu, Jie Fu, Yue Dong, Xipeng Qiu, Jackie Chi Kit Cheung

We present two architectures for multi-task learning with neural sequence models.

General Classification Multi-Task Learning +2

A Hierarchical Neural Attention-based Text Classifier

1 code implementation EMNLP 2018 Koustuv Sinha, Yue Dong, Jackie Chi Kit Cheung, Derek Ruths

Deep neural networks have shown superior performance over traditional supervised classifiers in text classification.

General Classification Text Classification +1

BanditSum: Extractive Summarization as a Contextual Bandit

1 code implementation EMNLP 2018 Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, Jackie Chi Kit Cheung

In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically-generated extractive labels.
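
A toy sketch of the contextual-bandit loop: sample an extract from sentence affinities, observe a reward, and nudge the affinities with a REINFORCE-style update. The reward here is a stand-in for ROUGE, the without-replacement gradient is an approximation, and all names are illustrative.

```python
# Minimal sketch (hypothetical, not the authors' code) of extractive
# summarization as a contextual bandit.
import numpy as np

rng = np.random.default_rng(0)
n_sents, k = 10, 3
logits = np.zeros(n_sents)           # stand-in for neural affinity scores

def toy_reward(extract):
    # Pretend sentences 0, 2, 5 are the "good" ones (a stand-in for ROUGE).
    return len(set(extract) & {0, 2, 5}) / k

for _ in range(2000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    extract = rng.choice(n_sents, size=k, replace=False, p=probs)
    grad = -probs                    # approximate REINFORCE log-prob gradient
    grad[extract] += 1.0
    logits += 0.05 * toy_reward(extract) * grad

print(np.argsort(-logits)[:k])       # should recover {0, 2, 5}
```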

Extractive Summarization Extractive Text Summarization

A Survey on Neural Network-Based Summarization Methods

no code implementations 19 Mar 2018 Yue Dong

Automatic text summarization, the automated process of shortening a text while preserving the main ideas of the document(s), is a critical research area in natural language processing.

Text Summarization

Learning Non-Lambertian Object Intrinsics across ShapeNet Categories

1 code implementation CVPR 2017 Jian Shi, Yue Dong, Hao Su, Stella X. Yu

Rendered with realistic environment maps, millions of synthetic images of objects and their corresponding albedo, shading, and specular ground-truth images are used to train an encoder-decoder CNN.
