Search Results for author: Bidisha Samanta

Found 15 papers, 6 papers with code

A Data Bootstrapping Recipe for Low-Resource Multilingual Relation Classification

no code implementations • CoNLL (EMNLP) 2021 • Arijit Nag, Bidisha Samanta, Animesh Mukherjee, Niloy Ganguly, Soumen Chakrabarti

Data collection is challenging for Indian languages, because they are syntactically and morphologically diverse, as well as different from resource-rich languages like English.

Classification • Relation +1

LLM Augmented LLMs: Expanding Capabilities through Composition

1 code implementation • 4 Jan 2024 • Rachit Bansal, Bidisha Samanta, Siddharth Dalmia, Nitish Gupta, Shikhar Vashishth, Sriram Ganapathy, Abhishek Bapna, Prateek Jain, Partha Talukdar

Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains.

Arithmetic Reasoning • Code Generation

CrysGNN: Distilling pre-trained knowledge to enhance property prediction for crystalline materials

1 code implementation • 14 Jan 2023 • Kishalay Das, Bidisha Samanta, Pawan Goyal, Seung-Cheol Lee, Satadeep Bhattacharjee, Niloy Ganguly

To leverage these untapped data, this paper presents CrysGNN, a new pre-trained GNN framework for crystalline materials, which captures both node and graph level structural information of crystal graphs using a huge amount of unlabelled material data.

Formation Energy • Graph Neural Network +1

Bootstrapping Multilingual Semantic Parsers using Large Language Models

no code implementations • 13 Oct 2022 • Abhijeet Awasthi, Nitish Gupta, Bidisha Samanta, Shachi Dave, Sunita Sarawagi, Partha Talukdar

Despite the cross-lingual generalization demonstrated by pre-trained multilingual models, the translate-train paradigm of transferring English datasets across multiple languages remains a key mechanism for training task-specific multilingual models.

Semantic Parsing • Translation

A Data Bootstrapping Recipe for Low Resource Multilingual Relation Classification

no code implementations • 18 Oct 2021 • Arijit Nag, Bidisha Samanta, Animesh Mukherjee, Niloy Ganguly, Soumen Chakrabarti

Relation classification (sometimes called 'extraction') requires trustworthy datasets for fine-tuning large language models, as well as for evaluation.

Classification • Relation +1

Few-shot Controllable Style Transfer for Low-Resource Multilingual Settings

no code implementations • ACL 2022 • Kalpesh Krishna, Deepak Nathani, Xavier Garcia, Bidisha Samanta, Partha Talukdar

When compared to prior work, our model achieves 2-3x better performance in formality transfer and code-mixing addition across seven languages.

Attribute • Sentence +3

Fine-grained Sentiment Controlled Text Generation

no code implementations • 17 Jun 2020 • Bidisha Samanta, Mohit Agarwal, Niloy Ganguly

DE-VAE achieves better control of sentiment as an attribute while preserving the content by learning a suitable lossless transformation network from the disentangled sentiment space to the desired entangled representation.

Attribute • Text Generation

A Deep Generative Model for Code-Switched Text

1 code implementation • 21 Jun 2019 • Bidisha Samanta, Sharmila Reddy, Hussain Jagirdar, Niloy Ganguly, Soumen Chakrabarti

Code-switching, the interleaving of two or more languages within a sentence or discourse, is pervasive in multilingual societies.


NeVAE: A Deep Generative Model for Molecular Graphs

2 code implementations • 14 Feb 2018 • Bidisha Samanta, Abir De, Gourhari Jana, Pratim Kumar Chattaraj, Niloy Ganguly, Manuel Gomez-Rodriguez

Moreover, in contrast with the state of the art, our decoder is able to provide the spatial coordinates of the atoms of the molecules it generates.

Bayesian Optimization • Decoder

Is this word borrowed? An automatic approach to quantify the likeliness of borrowing in social media

no code implementations • 15 Mar 2017 • Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Prithwish Mukherjee, Monojit Choudhury, Animesh Mukherjee

We first propose a context-based clustering method to sample a set of candidate words from the social media data. Next, we propose three novel and similar metrics based on the usage of these words by users in different tweets; these metrics are used to score and rank the candidate words, indicating their likeliness of having been borrowed.
