Search Results for author: Sanjeev Kumar Karn

Found 10 papers, 1 paper with code

News Article Teaser Tweets and How to Generate Them

2 code implementations • NAACL 2019 • Sanjeev Kumar Karn, Mark Buckley, Ulli Waltinger, Hinrich Schütze

In this work, we define the task of teaser generation and provide an evaluation benchmark and baseline systems for the process of generating teasers.

A Hierarchical Decoder with Three-level Hierarchical Attention to Generate Abstractive Summaries of Interleaved Texts

no code implementations • 5 Jun 2019 • Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze

Interleaved texts, where posts belonging to different threads occur in one sequence, are a common occurrence, e.g., in online chat conversations.

Generating Multi-Sentence Abstractive Summaries of Interleaved Texts

no code implementations • 25 Sep 2019 • Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze

The interleaved posts are encoded hierarchically, i.e., word-to-word (words in a post) followed by post-to-post (posts in a channel).
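The two-level encoding described in the abstract can be sketched as follows. This is a toy illustration only: simple mean pooling stands in for the model's learned encoders, and the function names (`encode_post`, `encode_channel`) are made up for this example, not taken from the paper.

```python
def mean_vec(vectors):
    """Average a list of equal-length vectors component-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def encode_post(word_vectors):
    # Word-to-word level: pool the word vectors of one post into a post vector.
    return mean_vec(word_vectors)

def encode_channel(posts):
    # Post-to-post level: pool the post vectors of a channel into a channel vector.
    return mean_vec([encode_post(p) for p in posts])

# A channel with two interleaved posts, each a list of 2-d word vectors.
channel = [
    [[1.0, 0.0], [3.0, 2.0]],   # post 1: two word vectors -> [2.0, 1.0]
    [[0.0, 4.0]],               # post 2: one word vector  -> [0.0, 4.0]
]
print(encode_channel(channel))  # [1.0, 2.5]
```

The point of the hierarchy is that word-level pooling happens inside each post before any cross-post interaction, mirroring the word-to-word then post-to-post order in the abstract.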

Disentanglement • Sentence

Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data

no code implementations • EACL (AdaptNLP) 2021 • Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schuetze

Interleaved texts, where posts belonging to different threads occur in one sequence, are common in online chat, making it time-consuming to obtain an overview of the discussions.

Disentanglement • Few-Shot Learning • +1

shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned LLMs for Radiology Report Impression Generation

no code implementations • 5 Jun 2023 • Sanjeev Kumar Karn, Rikhiya Ghosh, Kusuma P, Oladimeji Farri

Instruction-tuned generative large language models (LLMs) such as ChatGPT and BLOOMZ possess excellent generalization abilities, but they face limitations in understanding radiology reports, particularly in the task of generating the IMPRESSIONS section from the FINDINGS section.

Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering

no code implementations • 24 Apr 2024 • Cuong Nhat Ha, Shima Asaadi, Sanjeev Kumar Karn, Oladimeji Farri, Tobias Heimann, Thomas Runkler

Vision-language models, while effective in general domains and showing strong performance in diverse multi-modal applications like visual question answering (VQA), struggle to maintain the same level of effectiveness in more specialized domains, e.g., medical.
