Search Results for author: David Wan

Found 11 papers, 6 papers with code

Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training

no code implementations · 4 Mar 2024 · David Wan, Jaemin Cho, Elias Stengel-Eskin, Mohit Bansal

Highlighting particularly relevant regions of an image can improve the performance of vision-language models (VLMs) on various vision-language (VL) tasks by guiding the model to attend more closely to these regions of interest.

Math · Phrase Grounding +2
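The region-guidance idea can be pictured as a classifier-free-guidance-style contrast: amplify whatever changes in the model's token logits when the region of interest is visible versus masked out. The function name and `alpha` weighting below are illustrative assumptions, not the paper's exact formulation.

```python
def contrastive_guidance(logits_with_region, logits_without_region, alpha=1.0):
    """Contrast two logit vectors from the same VLM: one computed on the
    image with the region of interest visible, one with it masked out.
    Tokens whose scores depend on the region are boosted by alpha."""
    return [
        lw + alpha * (lw - lo)
        for lw, lo in zip(logits_with_region, logits_without_region)
    ]

# A token the region supports (2.0 vs 1.0) is amplified; an
# unaffected token (1.0 vs 1.0) is left unchanged.
guided = contrastive_guidance([2.0, 1.0], [1.0, 1.0], alpha=1.0)
```

Because the contrast is computed purely at decoding time, no training or fine-tuning of the VLM is needed.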

HistAlign: Improving Context Dependency in Language Generation by Aligning with History

1 code implementation · 8 May 2023 · David Wan, Shiyue Zhang, Mohit Bansal

Cache-LMs, which augment LMs with a memory of recent history, can increase context dependency and have shown remarkable performance in diverse language generation tasks.

Abstractive Text Summarization · Text Generation
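In its simplest form, a cache-LM interpolates the base model's next-token distribution with a distribution built from the recent history. The unigram-count cache and `lam` mixing weight below are a minimal stand-in for continuous-cache approaches, not HistAlign's method itself.

```python
from collections import Counter

def cache_lm_probs(base_probs, history, lam=0.2):
    """Mix a base LM distribution (token -> probability) with a unigram
    cache built from recently generated tokens. Tokens that appeared in
    the history get a boost, increasing context dependency."""
    counts = Counter(history)
    total = sum(counts.values())
    mixed = {}
    for tok, p in base_probs.items():
        cache_p = counts[tok] / total if total else 0.0
        mixed[tok] = (1 - lam) * p + lam * cache_p
    return mixed

# "a" dominates the recent history, so its mixed probability rises
# above its base probability of 0.5.
probs = cache_lm_probs({"a": 0.5, "b": 0.5}, ["a", "a", "b", "a"], lam=0.2)
```

HistAlign's contribution, per the abstract, is aligning the cache representations with the base LM so that this history signal is actually useful; the sketch above only shows the interpolation mechanism.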

Faithfulness-Aware Decoding Strategies for Abstractive Summarization

1 code implementation · 6 Mar 2023 · David Wan, Mengwen Liu, Kathleen McKeown, Markus Dreyer, Mohit Bansal

We present a systematic study of the effect of generation techniques such as beam search and nucleus sampling on faithfulness in abstractive summarization.

Abstractive Text Summarization
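Nucleus (top-p) sampling, one of the decoding strategies studied, restricts sampling to the smallest set of tokens whose cumulative probability exceeds p. A minimal pure-Python sketch:

```python
import math
import random

def nucleus_sample(logits, p=0.9, seed=0):
    """Sample a token index from the smallest set of tokens whose
    cumulative probability exceeds p (nucleus / top-p sampling)."""
    # Softmax over the logits (shifted by the max for stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Sort token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    # Keep the most probable tokens until cumulative mass exceeds p.
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    # Renormalize within the nucleus and sample.
    total = sum(probs[i] for i in nucleus)
    weights = [probs[i] / total for i in nucleus]
    return random.Random(seed).choices(nucleus, weights=weights, k=1)[0]
```

With a sharply peaked distribution and a small p, the nucleus collapses to the single most probable token, so sampling becomes greedy; larger p admits more of the tail, which is one reason the faithfulness of the resulting summaries varies with the decoding strategy.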

Evaluating and Improving Factuality in Multimodal Abstractive Summarization

1 code implementation · 4 Nov 2022 · David Wan, Mohit Bansal

Current metrics for evaluating factuality for abstractive document summarization have achieved high correlations with human judgment, but they do not account for the vision modality and thus are not adequate for vision-and-language summarization.

Abstractive Text Summarization · Document Summarization

Extractive is not Faithful: An Investigation of Broad Unfaithfulness Problems in Extractive Summarization

1 code implementation · 8 Sep 2022 · Shiyue Zhang, David Wan, Mohit Bansal

Though extractive summarization is less prone to the common unfaithfulness issues of abstractive summaries, does that mean extractive is equal to faithful?

Abstractive Text Summarization · Extractive Summarization

FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization

1 code implementation · NAACL 2022 · David Wan, Mohit Bansal

We present FactPEGASUS, an abstractive summarization model that addresses the problem of factuality during pre-training and fine-tuning: (1) We augment the sentence selection strategy of PEGASUS's (Zhang et al., 2020) pre-training objective to create pseudo-summaries that are both important and factual; (2) We introduce three complementary components for fine-tuning.

Abstractive Text Summarization · Contrastive Learning +1
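The augmented sentence-selection step can be pictured as ranking candidate pseudo-summary sentences by a blend of an importance score and a factuality score. The linear combination and `beta` weight below are hypothetical illustrations, not the paper's exact objective.

```python
def select_pseudo_summary(sentences, importance_score, fact_score, k=1, beta=0.5):
    """Rank candidate sentences by a weighted blend of importance
    (e.g. ROUGE overlap with the rest of the document, as in PEGASUS)
    and a factuality score, then keep the top k as the pseudo-summary.
    The blending scheme here is a hypothetical sketch."""
    scored = sorted(
        sentences,
        key=lambda s: beta * importance_score(s) + (1 - beta) * fact_score(s),
        reverse=True,
    )
    return scored[:k]

# A sentence that is both reasonably important and factual can outrank
# one that scores highest on importance alone.
importance = {"s1": 0.9, "s2": 0.2, "s3": 0.5}
factuality = {"s1": 0.1, "s2": 0.9, "s3": 0.8}
best = select_pseudo_summary(["s1", "s2", "s3"], importance.get, factuality.get, k=1)
```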

Segmenting Subtitles for Correcting ASR Segmentation Errors

no code implementations · EACL 2021 · David Wan, Chris Kedzie, Faisal Ladhak, Elsbeth Turcan, Petra Galuščáková, Elena Zotkina, Zhengping Jiang, Peter Bell, Kathleen McKeown

Typical ASR systems segment the input audio into utterances using purely acoustic information, which may not resemble the sentence-like units that are expected by conventional machine translation (MT) systems for Spoken Language Translation.

Information Retrieval · Machine Translation +4

Incorporating Terminology Constraints in Automatic Post-Editing

1 code implementation · WMT (EMNLP) 2020 · David Wan, Chris Kedzie, Faisal Ladhak, Marine Carpuat, Kathleen McKeown

In this paper, we present both autoregressive and non-autoregressive models for lexically constrained APE, demonstrating that our approach enables preservation of 95% of the terminologies and also improves translation quality on English-German benchmarks.

Automatic Post-Editing · Data Augmentation +1
