no code implementations • 5 Oct 2023 • Litton J Kurisinkel, Nancy F Chen
Memory-efficient large language models are good at refining text input for better readability.
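Below is a minimal sketch, not the paper's system, of the idea of using a small, memory-efficient instruction-tuned model to refine a passage for readability; the model name, prompt, and generation settings are illustrative assumptions.

```python
# Hedged illustration: rewriting text for readability with a small seq2seq model.
# "google/flan-t5-small" and the prompt wording are assumptions, not the paper's setup.
from transformers import pipeline

rewriter = pipeline("text2text-generation", model="google/flan-t5-small")

passage = ("Notwithstanding the aforementioned considerations, the committee was of "
           "the opinion that implementation ought to be deferred.")
prompt = f"Rewrite the following sentence in plain, readable English: {passage}"

# The pipeline returns a list of dicts with a "generated_text" field.
print(rewriter(prompt, max_new_tokens=64)[0]["generated_text"])
```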
no code implementations • 5 Oct 2023 • Litton J Kurisinkel, Nancy F. Chen
Multi-document summarization is a challenging task due to its inherent subjective bias, highlighted by the low inter-annotator ROUGE-1 score of 0.4 among DUC-2004 reference summaries.
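To make the agreement figure concrete, here is a small self-contained sketch of ROUGE-1 unigram overlap between two reference summaries; the summaries themselves are invented examples, not DUC-2004 data.

```python
# Hedged illustration: ROUGE-1 F1 as unigram overlap between two human references.
# Low scores between references written for the same documents signal subjective bias.
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between two whitespace-tokenized texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigram counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Two hypothetical reference summaries of the same document cluster:
ref_a = "the storm forced thousands of residents to evacuate coastal towns"
ref_b = "heavy flooding displaced many people living near the coast"
print(rouge_1_f1(ref_a, ref_b))  # low overlap despite describing the same event
```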
no code implementations • NAACL 2021 • Litton J Kurisinkel, Ai Ti Aw, Nancy F Chen
Neural models for text generation are often designed in an end-to-end fashion, typically with zero control over intermediate computations, limiting their practical usability in downstream applications.
no code implementations • IJCNLP 2019 • Litton J Kurisinkel, Nancy Chen
This task differs from other natural language generation tasks in the following ways: (1) The input is a set of identifiable entities (ICD codes) where the relations between individual entities are not explicitly specified.
1 code implementation • 29 Aug 2018 • Pinkesh Badjatiya, Litton J Kurisinkel, Manish Gupta, Vasudeva Varma
Text segmentation plays an important role in various Natural Language Processing (NLP) tasks like summarization, context understanding, document indexing and document noise removal.
no code implementations • 9 Aug 2018 • Pruthwik Mishra, Litton J Kurisinkel, Dipti Misra Sharma
Frame identification depends on the verb in a sentence.
no code implementations • IJCNLP 2017 • Litton J Kurisinkel, Yue Zhang, Vasudeva Varma
The method entrusts the summarizer to generate its own topically coherent sequential structures from scratch for effective communication.
no code implementations • IJCNLP 2017 • Raghuram Vadapalli, Litton J Kurisinkel, Manish Gupta, Vasudeva Varma
Ideally, a metric evaluating an abstractive system summary should represent the extent to which the system-generated summary approximates the semantic inference conceived by the reader using a human-written reference summary.
Abstractive Text Summarization • Natural Language Inference • +2
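As one way to ground this notion of semantic inference, the sketch below scores a system summary by how strongly a reference summary entails it with an off-the-shelf NLI model; this is not the authors' metric, and the model choice and entailment-probability scoring rule are assumptions for illustration.

```python
# Hedged illustration: NLI-entailment-based scoring of a system summary against a reference.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # any NLI model from the Hugging Face hub would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_score(reference: str, system_summary: str) -> float:
    """Probability that the reference summary (premise) entails the system summary (hypothesis)."""
    inputs = tokenizer(reference, system_summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Look up the entailment class from the model config rather than hardcoding an index.
    entail_idx = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]
    return probs[entail_idx].item()

# Invented example texts, purely for demonstration:
reference = "The company reported record profits driven by strong overseas sales."
system = "Profits rose sharply thanks to international demand."
print(entailment_score(reference, system))
```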