1 code implementation • 3 Oct 2024 • Anwoy Chatterjee, H S V N S Kowndinya Renduchintala, Sumit Bhatia, Tanmoy Chakraborty
Despite their remarkable capabilities, Large Language Models (LLMs) are surprisingly sensitive to minor prompt variations, such as spelling errors, changes in wording, or changes to the prompt template, and often generate significantly divergent outputs in response.
no code implementations • 16 May 2024 • Shaz Furniturewala, Surgan Jandial, Abhinav Java, Pragyan Banerjee, Simra Shahid, Sumit Bhatia, Kokil Jaidka
Existing debiasing techniques are typically training-based or require access to the model's internals and output distributions, and are therefore inaccessible to end-users looking to adapt LLM outputs to their particular needs.
no code implementations • 14 Mar 2024 • Balaji Ganesan, Matheen Ahmed Pasha, Srinivasa Parkala, Neeraj R Singh, Gayatri Mishra, Sumit Bhatia, Hima Patel, Somashekar Naganna, Sameep Mehta
Explaining neural model predictions to users requires creativity.
1 code implementation • 13 Mar 2024 • H S V N S Kowndinya Renduchintala, Sumit Bhatia, Ganesh Ramakrishnan
Instruction Tuning involves finetuning a language model on a collection of instruction-formatted datasets in order to enhance the generalizability of the model to unseen tasks.
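As a rough illustration of what an instruction-formatted training example looks like (the prompt template and field names below are assumptions for illustration, not the format of any particular dataset used in this work):

```python
# Minimal sketch of an instruction-formatted training example. The template and
# field names are assumptions, not the actual schema of the paper's datasets.

def format_example(instruction: str, model_input: str, output: str) -> dict:
    """Turn an (instruction, input, output) triple into a prompt/target pair
    for supervised fine-tuning of a language model."""
    prompt = (
        "### Instruction:\n" + instruction + "\n\n"
        "### Input:\n" + model_input + "\n\n"
        "### Response:\n"
    )
    return {"prompt": prompt, "target": output}

example = format_example(
    instruction="Classify the sentiment of the review as positive or negative.",
    model_input="The film was a complete waste of two hours.",
    output="negative",
)
# During instruction tuning, the loss is typically computed only on the target
# tokens that follow the prompt.
print(example["prompt"] + example["target"])
```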
1 code implementation • 2 Feb 2024 • Sohan Patnaik, Heril Changwal, Milan Aggarwal, Sumit Bhatia, Yaman Kumar, Balaji Krishnamurthy
Typically, only a small part of the whole table is relevant to derive the answer for a given question.
Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)
1 code implementation • 9 Nov 2023 • Pragyan Banerjee, Abhinav Java, Surgan Jandial, Simra Shahid, Shaz Furniturewala, Balaji Krishnamurthy, Sumit Bhatia
Fairness in Language Models (LMs) remains a longstanding challenge, given the inherent biases in training data that can be perpetuated by models and affect the downstream tasks.
no code implementations • 9 Aug 2023 • Gunjan Singh, Sumit Bhatia, Raghava Mutharaju
Ontologies are used in various domains, with RDF and OWL being prominent standards for ontology development.
no code implementations • 14 Jul 2023 • Shivani Kumar, Sumit Bhatia, Milan Aggarwal, Tanmoy Chakraborty
To this end, we propose UNIT, a UNified dIalogue dataseT constructed from conversations in existing datasets for different dialogue tasks, capturing the nuances of each of them.
1 code implementation • 16 May 2023 • Simra Shahid, Tanay Anand, Nikitha Srikanth, Sumit Bhatia, Balaji Krishnamurthy, Nikaash Puri
We present HyHTM, a Hyperbolic geometry based Hierarchical Topic Model that addresses these limitations by incorporating hierarchical information from hyperbolic geometry to explicitly model hierarchies in topic models.
1 code implementation • 11 May 2023 • H S V N S Kowndinya Renduchintala, KrishnaTeja Killamsetty, Sumit Bhatia, Milan Aggarwal, Ganesh Ramakrishnan, Rishabh Iyer, Balaji Krishnamurthy
A salient characteristic of pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and the emergence of new capabilities with increasing model capacity and pre-training dataset size.
1 code implementation • 25 Apr 2023 • Michael Llordes, Debasis Ganguly, Sumit Bhatia, Chirag Agarwal
Neural retrieval models (NRMs) have been shown to outperform their statistical counterparts owing to their ability to capture semantic meaning via dense document representations.
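A minimal sketch of the dense-retrieval idea the abstract refers to, with a placeholder random encoder standing in for a trained neural model:

```python
# Sketch of dense retrieval: queries and documents are embedded as dense
# vectors and ranked by similarity. The "encoder" here is a random stand-in;
# a real neural retrieval model would use a trained encoder.
import numpy as np

rng = np.random.default_rng(0)

def embed(texts, dim=128):
    # Placeholder encoder: one random unit vector per text (illustration only).
    vecs = rng.normal(size=(len(texts), dim))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

docs = ["neural retrieval models", "BM25 term weighting", "knowledge graph embeddings"]
doc_vecs = embed(docs)
query_vec = embed(["how do neural rankers capture meaning"])[0]

scores = doc_vecs @ query_vec          # cosine similarity, since vectors are unit-norm
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {docs[i]}")
```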
1 code implementation • 13 Sep 2022 • Sumit Neelam, Udit Sharma, Sumit Bhatia, Hima Karanam, Ankita Likhyani, Ibrahim Abdelaziz, Achille Fokoue, L. V. Subramaniam
Resource Description Framework (RDF) and Property Graph (PG) are the two most commonly used data models for representing, storing, and querying graph data.
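For illustration only, the same fact can be written under either data model; the identifiers and properties below are made up:

```python
# Illustrative contrast between the two graph data models (identifiers made up).

# RDF: knowledge is a set of subject-predicate-object triples.
rdf_triple = ("ex:Alice", "ex:worksFor", "ex:AcmeCorp")

# Property Graph: nodes and edges carry labels and key-value properties.
nodes = {
    "n1": {"label": "Person", "name": "Alice"},
    "n2": {"label": "Company", "name": "AcmeCorp"},
}
edges = [
    {"source": "n1", "target": "n2", "label": "WORKS_FOR", "since": 2020},
]
# Attaching an attribute such as `since` directly to the relationship is natural
# in a property graph, whereas plain RDF triples need reification or RDF-star.
print(rdf_triple, edges[0]["since"])
```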
1 code implementation • Findings (NAACL) 2022 • Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters.
no code implementations • NAACL 2022 • Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy
To train CoSe-Co, we propose a novel dataset comprising sentence and commonsense knowledge pairs.
1 code implementation • 28 May 2022 • Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A. Rossi, Vishwa Vinay, Aditya Grover
Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness.
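As a sketch of the contrastive objective behind CLIP-style models (a numpy stand-in with random embeddings, not the actual training code used here):

```python
# Sketch of a CLIP-style symmetric contrastive objective over a batch of paired
# image/text embeddings. Plain numpy with random inputs; real training uses
# learned encoders and backpropagation.
import numpy as np

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature   # (batch, batch) similarities
    labels = np.arange(len(logits))                 # true pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Symmetric loss: image-to-text plus text-to-image.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

batch = np.random.default_rng(0).normal(size=(4, 64))
print(clip_style_loss(batch, batch))  # perfectly aligned pairs give a near-zero loss
```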
1 code implementation • 20 Jan 2022 • Manjot Bedi, Tanisha Pandey, Sumit Bhatia, Tanmoy Chakraborty
We frame the problem as a binary classification task where all the references in a paper are to be classified as either baselines or non-baselines.
no code implementations • 3 Nov 2021 • Kritika Venkatachalam, Raghava Mutharaju, Sumit Bhatia
We propose an LSTM-based model for temporal and causal relation classification that captures the interrelations between the three encoded features.
no code implementations • 20 Oct 2021 • Biswesh Mohapatra, Sumit Bhatia, Raghava Mutharaju, G. Srinivasaraghavan
However, most of the existing KG embeddings only consider the network structure of the graph and ignore the semantics and the characteristics of the underlying ontology that provides crucial information about relationships between entities in the KG.
no code implementations • AKBC Workshop CSKB 2021 • Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy
Pre-trained Language Models (PTLMs) have been shown to perform well on natural language reasoning tasks requiring commonsense.
no code implementations • AKBC Workshop CSKB 2021 • Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy
This allows the training of the language model to be decoupled from the external knowledge source, and the latter can be updated without affecting the parameters of the language model.
no code implementations • COLING 2020 • Jaydeep Sen, Tanaya Babtiwale, Kanishk Saxena, Yash Butala, Sumit Bhatia, Karthik Sankaranarayanan
We posit that deploying ontology reasoning over domain semantics can help in achieving better natural language understanding for QA systems.
no code implementations • 14 May 2020 • Sérgio Nunes, Suzanne Little, Sumit Bhatia, Ludovico Boratto, Guillaume Cabanac, Ricardo Campos, Francisco M. Couto, Stefano Faralli, Ingo Frommholz, Adam Jatowt, Alípio Jorge, Mirko Marras, Philipp Mayr, Giovanni Stilo
In this report, we describe the experience of organizing the ECIR 2020 Workshops in this scenario from two perspectives: the workshop organizers and the workshop participants.
no code implementations • LREC 2020 • Dwaipayan Roy, Sumit Bhatia, Prateek Jain
Wikipedia is the largest web-based open encyclopedia covering more than three hundred languages.
no code implementations • 7 Mar 2020 • Balaji Ganesan, Srinivas Parkala, Neeraj R Singh, Sumit Bhatia, Gayatri Mishra, Matheen Ahmed Pasha, Hima Patel, Somashekar Naganna
Learning graph representations of n-ary relational data has a number of real world applications like anti-money laundering, fraud detection, and customer due diligence.
no code implementations • WS 2018 • Sumit Bhatia, Deepak P
Ideological leanings of an individual can often be gauged by the sentiment one expresses about different issues.
no code implementations • 25 Mar 2018 • Vinith Misra, Sumit Bhatia
Just as semantic hashing can accelerate information retrieval, binary-valued embeddings can significantly reduce latency in the retrieval of graphical data.
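A toy sketch of why binary codes help at retrieval time: sign-binarized embeddings can be compared with cheap Hamming distance instead of floating-point similarity (random vectors stand in for learned graph embeddings):

```python
# Toy sketch of retrieval with binary codes: sign-binarize embeddings and rank
# by Hamming distance. Random vectors stand in for learned graph embeddings; in
# practice the codes would be packed into machine words for fast XOR/popcount.
import numpy as np

rng = np.random.default_rng(0)

def binarize(vecs):
    """Sign-binarize real-valued embeddings into boolean codes."""
    return vecs > 0

def hamming(a, b):
    return np.count_nonzero(a != b)    # number of differing bits

codes = binarize(rng.normal(size=(1000, 64)))      # 1000 entities, 64-bit codes
query = binarize(rng.normal(size=64))

nearest = min(range(len(codes)), key=lambda i: hamming(codes[i], query))
print("nearest entity:", nearest)
```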
no code implementations • 17 Mar 2018 • Sumit Bhatia, Purusharth Dwivedi, Avneet Kaur
We address the problem of finding descriptive explanations of facts stored in a knowledge graph.