Search Results for author: Prafulla Kumar Choubey

Found 24 papers, 9 papers with code

Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events

no code implementations EMNLP 2017 Prafulla Kumar Choubey, Ruihong Huang

We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains.

Clustering, coreference-resolution +1
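
As a rough illustration of the iterative cluster-growing idea described above, the sketch below greedily merges event clusters whose best cross-cluster mention pair scores above a threshold. The pairwise scorer, threshold, and convergence rule are assumptions for the sketch, not the paper's actual model.

```python
# Illustrative sketch of iterative event-cluster merging (not the paper's model).
# `pair_score` is an assumed pairwise coreference scorer returning a value in [0, 1].

def iterative_event_clustering(mentions, pair_score, threshold=0.5, max_iters=10):
    # Start with singleton clusters, one per event mention.
    clusters = [{m} for m in mentions]
    for _ in range(max_iters):
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Score two clusters by their best cross-cluster mention pair.
                score = max(pair_score(a, b) for a in clusters[i] for b in clusters[j])
                if score >= threshold:
                    clusters[i] |= clusters[j]
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
        if not merged:  # No more confident merges: clusters have converged.
            break
    return clusters

# Toy usage with a trivial scorer that links mentions sharing the same trigger word.
mentions = [("attack", 0), ("assault", 2), ("attack", 5)]
score = lambda a, b: 1.0 if a[0] == b[0] else 0.0
print(iterative_event_clustering(mentions, score))
# [{('attack', 0), ('attack', 5)}, {('assault', 2)}] (set order may vary)
```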

TAMU at KBP 2017: Event Nugget Detection and Coreference Resolution

1 code implementation 6 Nov 2017 Prafulla Kumar Choubey, Ruihong Huang

Our simple system designed using minimal features achieved the micro-average F1 scores of 57.72, 44.27 and 42.47 for event span detection, type identification and realis status classification tasks respectively.

coreference-resolution, Event Coreference Resolution
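
For context, the micro-averaged F1 quoted above pools matches across all predictions before computing precision and recall. A minimal span-level version is sketched below; the official KBP scorer handles attribute matching and partial overlaps, which this toy function ignores.

```python
# Span-level micro F1: TP = predicted spans matching gold, FP = spurious
# predictions, FN = missed gold spans. Pool the counts, then compute F1 once.

def micro_f1(gold_spans, pred_spans):
    gold_spans, pred_spans = set(gold_spans), set(pred_spans)
    tp = len(gold_spans & pred_spans)
    fp = len(pred_spans - gold_spans)
    fn = len(gold_spans - pred_spans)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy spans as (doc_id, start, end) tuples; the values are illustrative only.
gold = [("d1", 3, 5), ("d1", 10, 11), ("d2", 0, 2)]
pred = [("d1", 3, 5), ("d2", 0, 2), ("d2", 7, 8)]
print(round(micro_f1(gold, pred), 2))  # 0.67
```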

Identifying the Most Dominant Event in a News Article by Mining Event Coreference Relations

no code implementations NAACL 2018 Prafulla Kumar Choubey, Kaushik Raju, Ruihong Huang

Identifying the most dominant and central event of a document, which governs and connects other foreground and background events in the document, is useful for many applications, such as text summarization, storyline generation and text segmentation.

Text Segmentation, Text Summarization

Improving Event Coreference Resolution by Modeling Correlations between Event Coreference Chains and Document Topic Structures

no code implementations ACL 2018 Prafulla Kumar Choubey, Ruihong Huang

This paper proposes a novel approach for event coreference resolution that models correlations between event coreference chains and document topical structures through an Integer Linear Programming formulation.

coreference-resolution, Event Coreference Resolution +1
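
The abstract refers to an Integer Linear Programming formulation. Below is a heavily simplified, generic sketch of how pairwise coreference links can be encoded as binary ILP variables with transitivity constraints, using PuLP as an assumed solver interface; the paper's actual objective and its topic-structure constraints are not reproduced here.

```python
from itertools import combinations
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum, value

# Toy pairwise coreference scores for four event mentions (illustrative numbers only).
mentions = ["m1", "m2", "m3", "m4"]
score = {("m1", "m2"): 0.9, ("m1", "m3"): 0.2, ("m1", "m4"): 0.1,
         ("m2", "m3"): 0.3, ("m2", "m4"): 0.15, ("m3", "m4"): 0.8}

prob = LpProblem("event_coreference", LpMaximize)

# One binary link variable per mention pair: 1 means "coreferent".
link = {p: LpVariable(f"link_{p[0]}_{p[1]}", cat=LpBinary) for p in score}

# Objective: reward links scoring above 0.5, penalize the rest.
prob += lpSum((score[p] - 0.5) * link[p] for p in score)

# Transitivity: if i~j and j~k are both selected, i~k must be selected too.
for i, j, k in combinations(mentions, 3):
    a, b, c = link[(i, j)], link[(j, k)], link[(i, k)]
    prob += a + b - c <= 1
    prob += a + c - b <= 1
    prob += b + c - a <= 1

prob.solve()
print({p: int(value(v)) for p, v in link.items()})
# Expected: m1-m2 and m3-m4 are linked, all other pairs stay 0.
```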

Improving Dialogue State Tracking by Discerning the Relevant Context

no code implementations NAACL 2019 Sanuj Sharma, Prafulla Kumar Choubey, Ruihong Huang

Specifically, we use the current user utterance and the most recent system utterance to determine the relevance of a system utterance.

Dialogue State Tracking

Modeling Document-level Causal Structures for Event Causal Relation Identification

no code implementations NAACL 2019 Lei Gao, Prafulla Kumar Choubey, Ruihong Huang

We aim to comprehensively identify all the event causal relations in a document, both within a sentence and across sentences, which is important for reconstructing pivotal event structures.

Relation, Sentence

In Plain Sight: Media Bias Through the Lens of Factual Reporting

1 code implementation IJCNLP 2019 Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, Lu Wang

The increasing prevalence of political bias in news media calls for greater public awareness of it, as well as robust methods for its detection.

Automatic Data Acquisition for Event Coreference Resolution

1 code implementation EACL 2021 Prafulla Kumar Choubey, Ruihong Huang

We propose to leverage lexical paraphrases and high precision rules informed by news discourse structure to automatically collect coreferential and non-coreferential event pairs from unlabeled English news articles.

coreference-resolution, Event Coreference Resolution
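
A toy sketch of the rule-based pair collection described above; the miniature paraphrase lexicon and the lead-paragraph discourse rule are illustrative assumptions standing in for the paper's lexical paraphrases and news-discourse rules.

```python
# Illustrative rule-based collection of event pairs from one unlabeled article.
# The tiny paraphrase lexicon and the "lead vs. body" rule are stand-ins only.

PARAPHRASES = [{"kill", "murder", "slay"}, {"buy", "acquire", "purchase"}]

def paraphrastic(t1, t2):
    return any(t1 in group and t2 in group for group in PARAPHRASES)

def collect_pairs(events):
    """events: list of (trigger, sentence_idx) tuples from one article."""
    coref, non_coref = [], []
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            (t1, s1), (t2, s2) = events[i], events[j]
            if paraphrastic(t1, t2) and (s1 <= 2 or s2 <= 2):
                # A lead-paragraph mention paraphrased later in the article
                # is treated as a likely coreferential pair.
                coref.append((events[i], events[j]))
            elif not paraphrastic(t1, t2) and t1 != t2:
                non_coref.append((events[i], events[j]))
    return coref, non_coref

events = [("kill", 0), ("murder", 4), ("buy", 5), ("acquire", 9)]
print(collect_pairs(events))
# One coreferential pair (kill/murder) and four non-coreferential pairs.
```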

Improving Gender Translation Accuracy with Filtered Self-Training

no code implementations 15 Apr 2021 Prafulla Kumar Choubey, Anna Currey, Prashant Mathur, Georgiana Dinu

Targeted evaluations have found that machine translation systems often output incorrect gender, even when the gender is clear from context.

Machine Translation, Sentence +1
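
One way to picture the filtered self-training idea is a gender-consistency check on synthetic parallel data, as in the sketch below. The keyword lists and the agreement heuristic are assumptions for an EN-to-ES toy example, not the paper's filtering criterion.

```python
# Illustrative gender-consistency filter for self-training data (EN -> ES toy example).

EN_FEMININE = {"she", "her", "hers", "woman", "actress"}
EN_MASCULINE = {"he", "him", "his", "man", "actor"}
ES_FEMININE = {"ella", "la", "doctora", "actriz"}
ES_MASCULINE = {"él", "el", "doctor", "actor"}

def source_gender(src):
    tokens = set(src.lower().split())
    if tokens & EN_FEMININE and not tokens & EN_MASCULINE:
        return "f"
    if tokens & EN_MASCULINE and not tokens & EN_FEMININE:
        return "m"
    return None  # Ambiguous or gender-neutral source: do not filter on gender.

def keep_pair(src, hyp):
    gender = source_gender(src)
    if gender is None:
        return True
    tokens = set(hyp.lower().split())
    if gender == "f":
        return bool(tokens & ES_FEMININE) and not tokens & ES_MASCULINE
    return bool(tokens & ES_MASCULINE) and not tokens & ES_FEMININE

synthetic = [("she is a doctor", "ella es doctora"),
             ("she is a doctor", "él es doctor")]
print([keep_pair(s, h) for s, h in synthetic])  # [True, False]
```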

CaPE: Contrastive Parameter Ensembling for Reducing Hallucination in Abstractive Summarization

no code implementations 14 Oct 2021 Prafulla Kumar Choubey, Alexander R. Fabbri, Jesse Vig, Chien-Sheng Wu, Wenhao Liu, Nazneen Fatema Rajani

Then, we fine-tune a base summarization model, which is trained on all training samples, on the clean (noisy) subset to obtain an expert (anti-expert) model.

Abstractive Text Summarization, Hallucination +1
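
The expert/anti-expert setup suggests combining models in parameter space. The sketch below mixes state dicts by moving the base weights toward the expert and away from the anti-expert; the specific combination rule and the scaling factor are assumptions, not necessarily CaPE's exact recipe.

```python
import torch

def ensemble_state_dicts(base_sd, expert_sd, anti_expert_sd, alpha=0.5):
    """Combine three state dicts key-by-key in parameter space."""
    return {name: base_sd[name] + alpha * (expert_sd[name] - anti_expert_sd[name])
            for name in base_sd}

# Tiny self-contained demo with fake "state dicts" holding one weight tensor each.
base = {"w": torch.tensor([1.0, 1.0])}
expert = {"w": torch.tensor([1.2, 0.9])}
anti = {"w": torch.tensor([0.8, 1.3])}
print(ensemble_state_dicts(base, expert, anti, alpha=0.5)["w"])  # tensor([1.2000, 0.8000])
```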

Modeling Document-level Temporal Structures for Building Temporal Dependency Graphs

1 code implementation 21 Oct 2022 Prafulla Kumar Choubey, Ruihong Huang

We propose to leverage news discourse profiling to model document-level temporal structures for building temporal dependency graphs.

Knowledge Distillation, Sentence

Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning

no code implementations 23 Oct 2022 Xiangyu Peng, Chen Xing, Prafulla Kumar Choubey, Chien-Sheng Wu, Caiming Xiong

Through this way, SESoM inherits the superior generalization of model ensemble approaches and simultaneously captures the sample-specific competence of each source prompt.

Transfer Learning
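
A minimal sketch of a sample-specific ensemble: for each input, an attention-style scorer weights the source models and their output logits are mixed. The scorer and the use of raw logits are simplifying assumptions, not the exact SESoM architecture.

```python
import torch
import torch.nn.functional as F

def sample_specific_ensemble(sample_repr, source_reprs, source_logits):
    # sample_repr: (hidden,); source_reprs: (num_sources, hidden);
    # source_logits: (num_sources, num_classes)
    scores = source_reprs @ sample_repr   # similarity of the sample to each source
    weights = F.softmax(scores, dim=0)    # per-sample weights over sources
    return weights @ source_logits        # weighted mix of source predictions

# Toy demo with random tensors.
torch.manual_seed(0)
hidden, num_sources, num_classes = 8, 3, 4
mixed = sample_specific_ensemble(torch.randn(hidden),
                                 torch.randn(num_sources, hidden),
                                 torch.randn(num_sources, num_classes))
print(mixed.shape)  # torch.Size([4])
```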

Improving Factual Consistency in Summarization with Compression-Based Post-Editing

1 code implementation 11 Nov 2022 Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, Caiming Xiong

We propose to use sentence-compression data to train the post-editing model to take a summary with extrinsic entity errors marked with special tokens and output a compressed, well-formed summary with those errors removed.

Informativeness, Sentence +1
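
A small sketch of how a post-editor's input could be prepared by wrapping extrinsic entity errors in special tokens; the token strings and the naive "entity absent from source" check are assumptions for illustration.

```python
# Mark extrinsic entity errors in a summary with special tokens before feeding
# it to a compression-style post-editor. Token strings and the check are assumed.

ERR_START, ERR_END = "<err>", "</err>"

def mark_extrinsic_entities(summary, source, summary_entities):
    """Wrap summary entities that never appear in the source document."""
    marked = summary
    for entity in summary_entities:
        if entity.lower() not in source.lower():
            marked = marked.replace(entity, f"{ERR_START} {entity} {ERR_END}")
    return marked

source = "The company reported strong quarterly earnings on Tuesday."
summary = "Acme Corp reported strong earnings on Tuesday, CEO Jane Doe said."
entities = ["Acme Corp", "Jane Doe", "Tuesday"]
print(mark_extrinsic_entities(summary, source, entities))
# "Acme Corp" and "Jane Doe" get wrapped; "Tuesday" is supported by the source.
```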

XGen-7B Technical Report

1 code implementation 7 Sep 2023 Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong

Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context.

Lexical Repetitions Lead to Rote Learning: Unveiling the Impact of Lexical Overlap in Train and Test Reference Summaries

no code implementations 15 Nov 2023 Prafulla Kumar Choubey, Alexander R. Fabbri, Caiming Xiong, Chien-Sheng Wu

Ideal summarization models should generalize to novel summary-worthy content without remembering reference training summaries by rote.
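
One simple way to quantify the lexical overlap studied here is n-gram overlap between each test reference summary and the training references, as sketched below; the choice of bigrams and max-overlap aggregation is an assumption.

```python
# Bigram overlap between a test reference summary and the training references,
# as one rough proxy for the train/test lexical overlap the paper studies.

def ngrams(text, n=2):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def max_train_overlap(test_summary, train_summaries, n=2):
    test_ngrams = ngrams(test_summary, n)
    if not test_ngrams:
        return 0.0
    return max(len(test_ngrams & ngrams(t, n)) / len(test_ngrams)
               for t in train_summaries)

train = ["the court upheld the ruling on appeal",
         "lawmakers passed the budget bill late on friday"]
test = "the appeals court upheld the ruling"
print(round(max_train_overlap(test, train), 2))  # 0.6
```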

GFST: Gender-Filtered Self-Training for More Accurate Gender in Translation

1 code implementation EMNLP 2021 Prafulla Kumar Choubey, Anna Currey, Prashant Mathur, Georgiana Dinu

Targeted evaluations have found that machine translation systems often output incorrect gender in translations, even when the gender is clear from context.

Machine Translation, Translation

Predicting Sentence Deletions for Text Simplification Using a Functional Discourse Structure

no code implementations ACL 2022 Bohan Zhang, Prafulla Kumar Choubey, Ruihong Huang

Document-level text simplification often deletes some sentences besides performing lexical, grammatical or structural simplification to reduce text complexity.

Sentence, Text Simplification
