Search Results for author: Koustava Goswami

Found 14 papers, 3 papers with code

ULD-NUIG at Social Media Mining for Health Applications (#SMM4H) Shared Task 2021

no code implementations NAACL (SMM4H) 2021 Atul Kr. Ojha, Priya Rani, Koustava Goswami, Bharathi Raja Chakravarthi, John P. McCrae

Social media platforms such as Twitter and Facebook have been used in a wide range of research studies, from cohort-level discussions to community-driven approaches, to address the challenges of using social media data for health, clinical, and biomedical information.

Named Entity Recognition +1

Cross-lingual Sentence Embedding using Multi-Task Learning

no code implementations EMNLP 2021 Koustava Goswami, Sourav Dutta, Haytham Assem, Theodorus Fransen, John P. McCrae

We demonstrate the efficacy of an unsupervised as well as a weakly supervised variant of our framework on STS, BUCC and Tatoeba benchmark tasks.

Multi-Task Learning Semantic Similarity +6
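The BUCC and Tatoeba benchmarks mentioned above evaluate cross-lingual sentence embeddings by mining parallel sentence pairs: each source sentence is matched to its nearest neighbour in the target language by embedding similarity. A minimal sketch of that retrieval step, assuming the embeddings are already computed (the paper's multi-task encoder itself is not reproduced here):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_parallel(src_embs: np.ndarray, tgt_embs: np.ndarray) -> list[int]:
    """For each source sentence, return the index of the nearest target
    sentence -- the BUCC/Tatoeba-style parallel-mining step."""
    src = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    tgt = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    return np.argmax(src @ tgt.T, axis=1).tolist()
```

On STS-style tasks the same `cosine_similarity` score is instead correlated against human similarity judgments.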

"Confidently Nonsensical?": A Critical Survey on the Perspectives and Challenges of 'Hallucinations' in NLP

no code implementations 11 Apr 2024 Pranav Narayanan Venkit, Tatiana Chakravorti, Vipul Gupta, Heidi Biggs, Mukund Srinath, Koustava Goswami, Sarah Rajtmajer, Shomir Wilson

We investigate how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, using a critical examination of 103 publications across NLP research.

Hallucination

Social Media Ready Caption Generation for Brands

no code implementations 3 Jan 2024 Himanshu Maheshwari, Koustava Goswami, Apoorv Saxena, Balaji Vasan Srinivasan

Our architecture has two parts: (a) an image captioning model that takes in an image the brand wants to post online and produces a plain English caption; (b) a module that takes the generated caption along with the target brand personality and outputs a catchy, personality-aligned social media caption.

Caption Generation Image Captioning +1
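The two-part architecture described above is a straightforward pipeline. A minimal sketch of its interface, with both stages stubbed out by hypothetical placeholders (the actual captioning and personality-alignment models are not reproduced here):

```python
def generate_plain_caption(image) -> str:
    # Stage (a): image-captioning model; stubbed with a fixed caption
    # to show only the interface, not the real model.
    return "a person holding a coffee cup"

def align_to_personality(caption: str, personality: str) -> str:
    # Stage (b): personality-conditioned rewriter; stubbed with a
    # template for illustration.
    return f"[{personality}] {caption}"

def brand_caption_pipeline(image, personality: str) -> str:
    """Chain the two stages: plain caption -> personality-aligned caption."""
    plain = generate_plain_caption(image)
    return align_to_personality(plain, personality)
```

In practice each stub would wrap a learned model; the pipeline shape stays the same.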

Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering

no code implementations 22 Nov 2023 Inderjeet Nair, Shwetha Somasundaram, Apoorv Saxena, Koustava Goswami

We address the task of evidence retrieval for long document question answering, which involves locating relevant paragraphs within a document to answer a question.

Multi-hop Question Answering Question Answering +1
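Evidence retrieval, as defined above, means scoring a document's paragraphs against the question and keeping the best ones. The paper exploits discourse structure with LLMs; the sketch below uses only a naive lexical-overlap scorer as a placeholder baseline, to show the task's shape:

```python
def score_paragraph(question: str, paragraph: str) -> float:
    # Placeholder relevance score: fraction of question tokens that
    # also occur in the paragraph (not the paper's method).
    q = set(question.lower().split())
    p = set(paragraph.lower().split())
    return len(q & p) / max(len(q), 1)

def retrieve_evidence(question: str, paragraphs: list[str], k: int = 2) -> list[int]:
    """Return indices of the top-k paragraphs most relevant to the question."""
    ranked = sorted(range(len(paragraphs)),
                    key=lambda i: score_paragraph(question, paragraphs[i]),
                    reverse=True)
    return ranked[:k]
```

A real system would replace `score_paragraph` with a learned retriever or LLM-based relevance judgment.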

Weakly-supervised Deep Cognate Detection Framework for Low-Resourced Languages Using Morphological Knowledge of Closely-Related Languages

1 code implementation 9 Nov 2023 Koustava Goswami, Priya Rani, Theodorus Fransen, John P. McCrae

We train an encoder to gain morphological knowledge of a language and transfer that knowledge to perform unsupervised and weakly-supervised cognate detection, with and without a pivot language, for closely related languages.

Information Retrieval, Named Entity Recognition +3

CoPL: Contextual Prompt Learning for Vision-Language Understanding

no code implementations 3 Jul 2023 Koustava Goswami, Srikrishna Karanam, Prateksha Udhayanan, K J Joseph, Balaji Vasan Srinivasan

Our key innovations over earlier works include using local image features as part of the prompt learning process, and more crucially, learning to weight these prompts based on local features that are appropriate for the task at hand.

A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis

no code implementations ICCV 2023 Aishwarya Agarwal, Srikrishna Karanam, K J Joseph, Apoorv Saxena, Koustava Goswami, Balaji Vasan Srinivasan

First, our attention segregation loss reduces the cross-attention overlap between attention maps of different concepts in the text prompt, thereby reducing the confusion/conflict among various concepts and the eventual capture of all concepts in the generated output.

Denoising Image Generation
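The attention segregation loss described above penalises different concepts attending to the same spatial locations. A minimal sketch, measuring pairwise overlap between per-concept attention maps with an elementwise minimum (an assumed overlap measure for illustration, not necessarily the paper's exact formulation):

```python
import numpy as np

def segregation_loss(attn_maps: np.ndarray) -> float:
    """Summed pairwise overlap between per-concept attention maps.

    attn_maps: (n_concepts, H, W), non-negative attention values.
    Higher values mean more concepts compete for the same pixels;
    minimising this at test time pushes the maps apart.
    """
    n = attn_maps.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # Elementwise minimum is nonzero only where both maps attend.
            loss += float(np.minimum(attn_maps[i], attn_maps[j]).sum())
    return loss
```

Perfectly disjoint maps give a loss of zero; identical maps give the maximum.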

SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains

1 code implementation 14 Feb 2023 Koustava Goswami, Lukas Lange, Jun Araki, Heike Adel

Prompting pre-trained language models leads to promising results across natural language processing tasks but is less effective when applied in low-resource domains, due to the domain gap between the pre-training data and the downstream task.

Language Modelling, Text Classification +1
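A gated soft prompt, as the title suggests, interpolates between domain-specific and general prompt embeddings with a learned gate. A minimal sketch under assumed shapes (the variable names and the per-token sigmoid gate are illustrative, not SwitchPrompt's exact parameterisation):

```python
import numpy as np

def gated_prompt(general: np.ndarray, domain: np.ndarray,
                 gate: np.ndarray) -> np.ndarray:
    """Mix general and domain-specific soft prompts with a learned gate.

    general, domain: (n_tokens, d) soft-prompt embeddings
    gate:            (n_tokens,) learned logits, one per prompt token
    """
    g = 1.0 / (1.0 + np.exp(-gate))          # sigmoid gate in [0, 1]
    # Per-token convex combination of the two prompt sets.
    return g[:, None] * domain + (1.0 - g[:, None]) * general
```

With the gate saturated positive the model uses the domain prompt; saturated negative, the general one; intermediate values blend both, letting the model bridge the pre-training/domain gap per token.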

Unsupervised Deep Language and Dialect Identification for Short Texts

no code implementations COLING 2020 Koustava Goswami, Rajdeep Sarkar, Bharathi Raja Chakravarthi, Theodorus Fransen, John P. McCrae

Automatic Language Identification (LI) or Dialect Identification (DI) of short texts in closely related languages or dialects is one of the primary steps in many natural language processing pipelines.

Dialect Identification Sentence +1
