Search Results for author: Tanya Goyal

Found 10 papers, 5 papers with code

Contemporary NLP Modeling in Six Comprehensive Programming Assignments

no code implementations • NAACL (TeachingNLP) 2021 • Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu

We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.

Training Dynamics for Text Summarization Models

no code implementations • Findings (ACL) 2022 • Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, Greg Durrett

Across different datasets (CNN/DM, XSum, MediaSum) and summary properties, such as abstractiveness and hallucination, we study what the model learns at different stages of its fine-tuning process.
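
As an aside on how a property like abstractiveness is typically quantified in this kind of analysis: a standard proxy is the fraction of summary n-grams that never appear in the source. A minimal sketch of that generic measure (function names are ours; this is not the paper's exact implementation):

```python
# Minimal sketch: abstractiveness as the fraction of summary n-grams
# absent from the source article. A standard proxy, not the paper's
# exact implementation.

def ngrams(tokens, n):
    """Return the set of n-grams in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_fraction(source: str, summary: str, n: int = 2) -> float:
    """Fraction of summary n-grams that never occur in the source."""
    src = ngrams(source.lower().split(), n)
    summ = ngrams(summary.lower().split(), n)
    if not summ:
        return 0.0
    return len(summ - src) / len(summ)

# Tracking this value across fine-tuning checkpoints shows how
# abstractive the model's summaries become at different stages.
print(novel_ngram_fraction("the cat sat on the mat", "a cat rested on a mat"))
```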

News Summarization • Text Summarization

HydraSum: Disentangling Stylistic Features in Text Summarization using Multi-Decoder Models

1 code implementation • 8 Oct 2021 • Tanya Goyal, Nazneen Fatema Rajani, Wenhao Liu, Wojciech Kryściński

Existing abstractive summarization models lack explicit control mechanisms that would allow users to influence the stylistic features of the model outputs.
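
The multi-decoder idea can be pictured as a learned gate blending the next-token distributions of several decoders. A minimal PyTorch sketch under that reading, with two decoders; the class, parameter names, and dimensions are illustrative, not HydraSum's released code:

```python
import torch
import torch.nn as nn

class TwoDecoderMixture(nn.Module):
    """Sketch of a two-decoder mixture: a gate over the decoder state
    blends each decoder's next-token distribution. Illustrative only."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.head1 = nn.Linear(hidden_size, vocab_size)  # decoder-1 LM head
        self.head2 = nn.Linear(hidden_size, vocab_size)  # decoder-2 LM head
        self.gate = nn.Linear(hidden_size, 1)            # mixture weight

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_size) decoder states
        g = torch.sigmoid(self.gate(hidden))             # (batch, seq, 1)
        p1 = torch.softmax(self.head1(hidden), dim=-1)
        p2 = torch.softmax(self.head2(hidden), dim=-1)
        # Fixing g at inference time (e.g., g=0 or g=1) selects one
        # decoder's "style"; intermediate values interpolate between them.
        return g * p1 + (1 - g) * p2

model = TwoDecoderMixture(hidden_size=768, vocab_size=50000)
probs = model(torch.randn(1, 5, 768))  # (1, 5, 50000); rows sum to 1
```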

Abstractive Text Summarization

Annotating and Modeling Fine-grained Factuality in Summarization

2 code implementations • NAACL 2021 • Tanya Goyal, Greg Durrett

Recent pre-trained abstractive summarization systems have started to achieve credible performance, but a major barrier to their use in practice is their propensity to output summaries that are not faithful to the input and that contain factual errors.

Abstractive Text Summarization

Evaluating Factuality in Generation with Dependency-level Entailment

1 code implementation • Findings (EMNLP) 2020 • Tanya Goyal, Greg Durrett

Experiments show that our dependency arc entailment model trained on this data can identify factual inconsistencies in paraphrasing and summarization better than sentence-level methods or those based on question generation, while additionally localizing the erroneous parts of the generation.
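
The arc-level idea: decompose a generated sentence into dependency arcs, then check each (head, relation, child) arc for support by the input, which is what lets the method localize errors. A sketch using spaCy, where the paper's trained arc entailment classifier is replaced by a toy lexical heuristic, labeled as such; the real decision is learned from data:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_arcs(sentence: str):
    """Decompose a sentence into (head, relation, child) dependency arcs."""
    doc = nlp(sentence)
    return [(tok.head.text, tok.dep_, tok.text)
            for tok in doc if tok.dep_ != "ROOT"]

def arc_supported(source: str, arc) -> bool:
    """Toy stand-in for the trained arc entailment classifier: treat an
    arc as supported if both of its tokens occur in the source.
    Illustrative only; the paper learns this decision from data."""
    head, _, child = arc
    words = set(source.lower().split())
    return head.lower() in words and child.lower() in words

def localize_errors(source: str, generation: str):
    """Flag the arcs of the generation that the source does not support."""
    return [a for a in dependency_arcs(generation)
            if not arc_supported(source, a)]

print(localize_errors("The senate passed the bill on Tuesday.",
                      "The senate rejected the bill."))
```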

Natural Language Inference • Question Generation +1

Neural Syntactic Preordering for Controlled Paraphrase Generation

2 code implementations • ACL 2020 • Tanya Goyal, Greg Durrett

Paraphrasing natural language sentences is a multifaceted process: it might involve replacing individual words or short phrases, local rearrangement of content, or high-level restructuring like topicalization or passivization.
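
A toy sketch of the preordering setup: enumerate a few reorderings of the source's coarse chunks, each of which would then condition a seq2seq paraphrase model that realizes a fluent sentence in that order. Comma-based chunking and unrestricted permutations here are simplifications of the paper's parse-based reordering model:

```python
import itertools

def coarse_chunks(sentence: str):
    """Toy stand-in for a syntactic parse: split on commas."""
    return [c.strip() for c in sentence.split(",") if c.strip()]

def candidate_preorders(sentence: str, max_orders: int = 3):
    """Yield a few reorderings of the sentence's coarse chunks. The
    paper derives candidate orders from a syntactic parse and a learned
    reordering model; raw permutations are purely illustrative."""
    chunks = coarse_chunks(sentence)
    for perm in itertools.islice(itertools.permutations(chunks), max_orders):
        yield ", ".join(perm)

# Each candidate order would be fed, together with the original
# sentence, to a seq2seq model that generates the paraphrase.
for order in candidate_preorders("after the meeting, the committee approved the plan"):
    print(order)
```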

Machine Translation • Paraphrase Generation +1

Embedding Time Expressions for Deep Temporal Ordering Models

3 code implementations • ACL 2019 • Tanya Goyal, Greg Durrett

Data-driven models have demonstrated state-of-the-art performance in inferring the temporal ordering of events in text.
