1 code implementation • 4 Jan 2024 • Rachit Bansal, Bidisha Samanta, Siddharth Dalmia, Nitish Gupta, Shikhar Vashishth, Sriram Ganapathy, Abhishek Bapna, Prateek Jain, Partha Talukdar
Foundational models with billions of parameters, trained on large corpora of data, have demonstrated non-trivial skills in a variety of domains.
no code implementations • 17 Oct 2022 • Rachit Bansal, Danish Pruthi, Yonatan Belinkov
In this work, we hypothesize -- and subsequently show -- that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
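A minimal sketch of what measuring activation-pattern diversity could look like in practice, assuming a toy PyTorch model; the entropy of each hidden neuron's binarized firing pattern is used here as an illustrative diversity score, not the paper's actual analysis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and batch standing in for a real network and evaluation set.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
inputs = torch.randn(256, 32)

# Capture post-ReLU hidden activations with a forward hook.
activations = {}
def save_hidden(module, inp, out):
    activations["hidden"] = out.detach()

model[1].register_forward_hook(save_hidden)
model(inputs)

acts = activations["hidden"]               # shape: (batch, num_neurons)
firing_rate = (acts > 0).float().mean(0)   # fraction of inputs each neuron fires on

# Binary entropy of the firing rate: near zero for neurons that always or
# never fire (low diversity), high for neurons whose pattern varies across inputs.
eps = 1e-8
entropy = -(firing_rate * (firing_rate + eps).log()
            + (1 - firing_rate) * (1 - firing_rate + eps).log())
print(f"mean activation-pattern entropy: {entropy.mean():.3f}")
```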
1 code implementation • Findings (NAACL) 2022 • Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters.
no code implementations • NAACL 2022 • Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy
To train CoSe-Co, we propose a novel dataset comprising sentence and commonsense knowledge pairs.
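As a rough illustration of such paired data, the sketch below shows one hypothetical way sentence-commonsense pairs could be represented for sequence-to-sequence training; the schema and example pairs are placeholders, not the paper's dataset.

```python
from dataclasses import dataclass

@dataclass
class SentenceCommonsensePair:
    sentence: str   # free-form input sentence
    knowledge: str  # linearized commonsense statement paired with it

# Hypothetical examples, only to show the shape of the records.
examples = [
    SentenceCommonsensePair(
        sentence="She put the milk back in the fridge.",
        knowledge="milk is kept in a refrigerator",
    ),
    SentenceCommonsensePair(
        sentence="He grabbed an umbrella before leaving.",
        knowledge="an umbrella is used for protection from rain",
    ),
]

for ex in examples:
    print(f"input:  {ex.sentence}\ntarget: {ex.knowledge}\n")
```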
1 code implementation • 24 May 2022 • Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra
It is widely accepted in the mode connectivity literature that when two neural networks are trained similarly on the same data, they are connected by a path through parameter space over which test set accuracy is maintained.
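A minimal sketch of the kind of check this sentence describes, assuming a toy setup: train two small networks on the same synthetic data and evaluate accuracy at points along the straight line between their parameters. Architecture, data, and training loop are illustrative placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X.sum(dim=1) > 0).long()  # simple synthetic binary labels

def train_model(seed):
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
    return model

model_a, model_b = train_model(1), train_model(2)

def accuracy_at(alpha):
    """Accuracy of the network whose weights are (1 - alpha) * A + alpha * B."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    interp = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    interp.load_state_dict({k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a})
    with torch.no_grad():
        preds = interp(X).argmax(dim=1)
    return (preds == y).float().mean().item()

# A flat accuracy curve along the path is the behaviour the sentence refers to;
# a dip at intermediate alpha indicates a barrier between the two solutions.
for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"alpha={alpha:.2f}  accuracy={accuracy_at(alpha):.3f}")
```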
no code implementations • AKBC Workshop CSKB 2021 • Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy
Pre-trained Language Models (PTLMs) have been shown to perform well on natural language reasoning tasks requiring commonsense.
no code implementations • AKBC Workshop CSKB 2021 • Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy
This allows the training of the language model to be decoupled from the external knowledge source, which can then be updated without affecting the parameters of the language model.
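A hedged sketch of the decoupling idea described above: the language model's parameters stay fixed while an external knowledge store can be updated or swapped independently. The retriever and the "model" here are toy placeholders, not the paper's actual architecture.

```python
class KnowledgeStore:
    """Swappable external memory of facts, retrieved here by naive word overlap."""
    def __init__(self, facts):
        self.facts = list(facts)

    def update(self, new_facts):
        self.facts.extend(new_facts)  # updating facts requires no model retraining

    def retrieve(self, query, k=1):
        query_words = set(query.lower().split())
        scored = sorted(
            self.facts,
            key=lambda f: len(query_words & set(f.lower().split())),
            reverse=True,
        )
        return scored[:k]

def frozen_lm(prompt):
    # Placeholder for a pre-trained LM whose weights are never touched.
    return f"[LM answer conditioned on: {prompt!r}]"

store = KnowledgeStore(["Paris is the capital of France."])
query = "What is the capital of France?"
context = " ".join(store.retrieve(query))
print(frozen_lm(context + " " + query))

# The knowledge source can later be refreshed without changing the LM at all.
store.update(["The Eiffel Tower is located in Paris."])
```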
2 code implementations • ACL 2021 • Rachit Bansal, Himanshu Choudhary, Ravneet Punia, Niko Schenk, Jacob L Dahl, Émilie Pagé-Perron
Despite recent advances in attention-based deep learning architectures across a majority of Natural Language Processing tasks, their application remains limited in low-resource settings because of a lack of pre-trained models for such languages.
1 code implementation • 12 Apr 2021 • Rachit Bansal, William Scott Paka, Nidhi, Shubhashis Sengupta, Tanmoy Chakraborty
In this work, we present ENDEMIC, a novel early-detection model that leverages exogenous and endogenous signals related to tweets while learning from limited labeled data.
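A very rough sketch of fusing tweet-internal ("endogenous") and outside-the-tweet ("exogenous") feature streams for classification, assuming arbitrary feature dimensions; this is only an illustration of combining the two signal types, not the ENDEMIC architecture.

```python
import torch
import torch.nn as nn

class TwoSignalClassifier(nn.Module):
    def __init__(self, endo_dim=64, exo_dim=32, hidden=64):
        super().__init__()
        self.endo_proj = nn.Linear(endo_dim, hidden)  # tweet-internal features
        self.exo_proj = nn.Linear(exo_dim, hidden)    # external-context features
        self.classifier = nn.Linear(2 * hidden, 2)    # e.g. fake vs. genuine

    def forward(self, endo_feats, exo_feats):
        fused = torch.cat(
            [torch.relu(self.endo_proj(endo_feats)),
             torch.relu(self.exo_proj(exo_feats))],
            dim=-1,
        )
        return self.classifier(fused)

model = TwoSignalClassifier()
logits = model(torch.randn(4, 64), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```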
1 code implementation • 17 Feb 2021 • William Scott Paka, Rachit Bansal, Abhay Kaushik, Shubhashis Sengupta, Tanmoy Chakraborty
As the COVID-19 pandemic has swept across the world, it has been accompanied by a tsunami of fake news and misinformation on social media.
1 code implementation • 1 Dec 2020 • Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated.
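As a small illustration of the kind of feature-highlighting explanation the sentence refers to, the sketch below computes input-gradient magnitudes as per-feature importance scores; the model and input are toy placeholders, not the paper's evaluation setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 8, requires_grad=True)

# Backpropagate the predicted-class logit to the input features.
logits = model(x)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Absolute input gradients serve as one simple saliency score per feature.
saliency = x.grad.abs().squeeze()
print("feature saliency:", saliency.tolist())
```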