2 code implementations • Findings (NAACL) 2022 • G P Shrivatsa Bhargav, Dinesh Khandelwal, Saswati Dana, Dinesh Garg, Pavan Kapanipathi, Salim Roukos, Alexander Gray, L Venkata Subramaniam
Interestingly, we discovered that BLINK exhibits diminishing returns, i.e., it reaches 98% of its performance with just 1% of the training data, and the remaining 99% of the data yields only a marginal 2% improvement.
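A minimal sketch of how such a data-efficiency curve could be traced; `train_and_eval` is a hypothetical stand-in for the actual BLINK training-and-evaluation pipeline, and the toy accuracies it returns exist only to make the loop runnable, not to reproduce the paper's numbers.

```python
import random

def train_and_eval(train_subset):
    # Hypothetical stand-in for training BLINK on a data subset and
    # returning held-out accuracy; swap in the real pipeline here.
    # The toy curve below merely makes the sketch runnable.
    return min(1.0, 0.90 + 0.08 * (len(train_subset) / 10_000) ** 0.1)

random.seed(0)
full_train = list(range(10_000))  # placeholder pool of training examples

# Re-train on growing fractions of the data to trace the learning curve.
for frac in (0.01, 0.10, 0.50, 1.00):
    subset = random.sample(full_train, int(frac * len(full_train)))
    print(f"{frac:>4.0%} of data -> accuracy {train_and_eval(subset):.3f}")
```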
no code implementations • 15 Jan 2022 • Sumit Neelam, Udit Sharma, Hima Karanam, Shajith Ikbal, Pavan Kapanipathi, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Young-suk Lee, Santosh Srivastava, Cezar Pendus, Saswati Dana, Dinesh Garg, Achille Fokoue, G P Shrivatsa Bhargav, Dinesh Khandelwal, Srinivas Ravishankar, Sairam Gurajada, Maria Chang, Rosario Uceda-Sosa, Salim Roukos, Alexander Gray, Guilherme Lima, Ryan Riegel, Francois Luus, L Venkata Subramaniam
Specifically, our benchmark is a temporal question answering dataset with the following advantages: (a) it is based on Wikidata, which is the most frequently curated, openly available knowledge base, (b) it includes intermediate SPARQL queries to facilitate the evaluation of semantic parsing based approaches for KBQA, and (c) it generalizes to multiple knowledge bases: Freebase and Wikidata.
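For illustration, here is a hedged example of the kind of (question, intermediate SPARQL) pair such a benchmark contains; this specific question and query are our own illustration, not drawn from the dataset. It runs against Wikidata's public endpoint, using position-held statements (P39) with start/end time qualifiers (P580/P582) to encode the temporal constraint.

```python
import requests

# Illustrative temporal question of the kind the benchmark pairs with an
# intermediate SPARQL query; this example is ours, not from the dataset.
question = "Who was the President of the United States in 1961?"
sparql = """
SELECT ?personLabel WHERE {
  ?person p:P39 ?stmt .
  ?stmt ps:P39 wd:Q11696 ;                  # position held: US President
        pq:P580 ?start .                    # qualifier: start time
  OPTIONAL { ?stmt pq:P582 ?end . }         # qualifier: end time
  FILTER (?start <= "1961-06-01T00:00:00Z"^^xsd:dateTime)
  FILTER (!BOUND(?end) || ?end >= "1961-06-01T00:00:00Z"^^xsd:dateTime)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": sparql, "format": "json"},
    headers={"User-Agent": "tempqa-example/0.1"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```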
no code implementations • 28 Sep 2021 • Sumit Neelam, Udit Sharma, Hima Karanam, Shajith Ikbal, Pavan Kapanipathi, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Young-suk Lee, Santosh Srivastava, Cezar Pendus, Saswati Dana, Dinesh Garg, Achille Fokoue, G P Shrivatsa Bhargav, Dinesh Khandelwal, Srinivas Ravishankar, Sairam Gurajada, Maria Chang, Rosario Uceda-Sosa, Salim Roukos, Alexander Gray, Guilherme Lima, Ryan Riegel, Francois Luus, L Venkata Subramaniam
In addition, to demonstrate extensibility to additional reasoning types, we evaluate on multi-hop reasoning datasets and a new temporal KBQA benchmark dataset on Wikidata, named TempQA-WD, introduced in this paper.
no code implementations • 6 Sep 2021 • Sukannya Purkayastha, Saswati Dana, Dinesh Garg, Dinesh Khandelwal, G P Shrivatsa Bhargav
Experimental results show that the quality of the generated SPARQL silhouette in the first stage is outstanding in the ideal scenario, but in realistic scenarios (i.e., with a noisy linker) it drops drastically.
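A schematic sketch of the two-stage idea under assumed interfaces: a (possibly noisy) linker proposes KB items, the first stage emits a SPARQL "silhouette" with masked slots, and the second stage fills or repairs those slots. All three functions below are toy stand-ins, not the paper's models.

```python
# Toy stand-ins for the two-stage pipeline; the real first stage is a
# learned generator and the real second stage is a learned corrector.

def noisy_linker(question):
    # Entity/relation linking; real linkers return ranked, possibly
    # wrong candidates, which is what degrades the final query.
    return {"<ent>": "wd:Q76", "<rel>": "wdt:P69"}

def stage1_silhouette(question):
    # Emit the query skeleton with entity/relation slots masked out.
    return "SELECT ?x WHERE { <ent> <rel> ?x . }"

def stage2_repair(silhouette, links):
    # Fill (and, in the paper, correct) the masked slots.
    for slot, kb_item in links.items():
        silhouette = silhouette.replace(slot, kb_item)
    return silhouette

q = "Which universities did Barack Obama attend?"
print(stage2_repair(stage1_silhouette(q), noisy_linker(q)))
```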
no code implementations • AKBC Workshop CSKB 2021 • Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, Dinesh Garg
We human-annotate a first-of-its-kind dataset (called ECQA) of positive and negative properties, as well as free-flow explanations, for 11K QA pairs taken from the CQA dataset.
1 code implementation • Findings (ACL) 2021 • Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Salim Roukos, Alexander Gray, Ramon Astudillo, Maria Chang, Cristina Cornelio, Saswati Dana, Achille Fokoue, Dinesh Garg, Alfio Gliozzo, Sairam Gurajada, Hima Karanam, Naweed Khan, Dinesh Khandelwal, Young-suk Lee, Yunyao Li, Francois Luus, Ndivhuwo Makondo, Nandana Mihindukulasooriya, Tahira Naseem, Sumit Neelam, Lucian Popa, Revanth Reddy, Ryan Riegel, Gaetano Rossiello, Udit Sharma, G P Shrivatsa Bhargav, Mo Yu
Knowledge base question answering (KBQA) is an important task in Natural Language Processing.
1 code implementation • NeurIPS 2020 • Santosh Kumar Srivastava, Dinesh Khandelwal, Dhiraj Madan, Dinesh Garg, Hima Karanam, L Venkata Subramaniam
Our training runs nine times faster than the original QE scheme on this task.
1 code implementation • NeurIPS 2019 • Dinesh Garg, Shajith Ikbal Mohamed, Santosh K. Srivastava, Harit Vishwakarma, Hima Karanam, L. Venkata Subramaniam
Statistical Relational Learning (SRL) methods are the most widely used techniques to generate distributional representations of symbolic Knowledge Bases (KBs).
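As a concrete (generic) instance of such distributional KB representations, the sketch below runs one margin-based update of a translation-style embedding in the spirit of TransE; it illustrates the family of methods, not this paper's scheme.

```python
import numpy as np

# Generic translation-style KB embedding (TransE-like): a triple (h, r, t)
# is scored by ||h + r - t||, and one margin-ranking SGD step pushes a true
# triple closer than a corrupted one. Illustration of the SRL/embedding
# family only, not this paper's method.
rng = np.random.default_rng(0)
dim, lr, margin = 16, 0.1, 1.0
E = rng.normal(size=(5, dim))           # entity embeddings
R = rng.normal(size=(2, dim))           # relation embeddings

def dist(h, r, t):
    return np.linalg.norm(E[h] + R[r] - E[t])

h, r, t, t_bad = 0, 1, 2, 3             # true triple (h, r, t); corrupted tail t_bad
if dist(h, r, t) + margin > dist(h, r, t_bad):   # margin violated: update
    g = (E[h] + R[r] - E[t]) / (dist(h, r, t) + 1e-9)
    g_bad = (E[h] + R[r] - E[t_bad]) / (dist(h, r, t_bad) + 1e-9)
    E[h] -= lr * (g - g_bad)
    R[r] -= lr * (g - g_bad)
    E[t] += lr * g
    E[t_bad] -= lr * g_bad
print(f"true: {dist(h, r, t):.3f}  corrupted: {dist(h, r, t_bad):.3f}")
```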
2 code implementations • ACL 2020 • Vittorio Castelli, Rishav Chakravarti, Saswati Dana, Anthony Ferritto, Radu Florian, Martin Franz, Dinesh Garg, Dinesh Khandelwal, Scott McCarley, Mike McCawley, Mohamed Nasr, Lin Pan, Cezar Pendus, John Pitrelli, Saurabh Pujar, Salim Roukos, Andrzej Sakrajda, Avirup Sil, Rosario Uceda-Sosa, Todd Ward, Rong Zhang
We introduce TechQA, a domain-adaptation question answering dataset for the technical support domain.
1 code implementation • ACL 2020 • Michael Glass, Alfio Gliozzo, Rishav Chakravarti, Anthony Ferritto, Lin Pan, G P Shrivatsa Bhargav, Dinesh Garg, Avirup Sil
BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA).
no code implementations • 20 Sep 2018 • Amar Prakash Azad, Dinesh Garg, Priyanka Agrawal, Arun Kumar
The goal behind Domain Adaptation (DA) is to leverage labeled examples from a source domain so as to infer an accurate model in a target domain where labels are unavailable or, at best, scarce.
no code implementations • 30 Nov 2017 • Kshiteej Sheth, Dinesh Garg, Anirban Dasgupta
Near isometric orthogonal embeddings to lower dimensions are a fundamental tool in data science and machine learning.
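As a baseline illustration of the problem setup (not the algorithm proposed in the paper), the sketch below builds an orthogonal embedding from the top-k right singular vectors of the data and measures how nearly it preserves pairwise distances.

```python
import numpy as np

# Baseline near-isometric orthogonal embedding via truncated SVD (i.e.,
# PCA directions); this illustrates the problem setup, not the paper's
# proposed algorithm. W has orthonormal columns, so x -> W.T x is an
# orthogonal map to k dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # 200 points in 50 dimensions
k = 20

_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
W = Vt[:k].T                            # 50 x k, W.T @ W = I_k
Y = X @ W                               # embedded points

def pairdists(A):
    # Condensed vector of pairwise Euclidean distances.
    sq = (A ** 2).sum(1)
    D2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    iu = np.triu_indices(len(A), k=1)
    return np.sqrt(np.maximum(D2[iu], 0.0))

ratios = pairdists(Y) / pairdists(X)    # all 1.0 would be an exact isometry
print(f"distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```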
no code implementations • EMNLP 2017 • Deepak P, Dinesh Garg, Shirish Shevade
The idea is that such a space mirrors semantic similarity among questions as well as answers, thereby enabling high-quality retrieval.
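A minimal sketch of retrieval in such a shared space; the hash-based `embed` function is a deterministic placeholder for the learned encoder, so the scores here only demonstrate the mechanics, not real semantic similarity.

```python
import hashlib
import numpy as np

def embed(texts, dim=32):
    # Placeholder encoder: a fixed random vector per string. A learned
    # model mapping questions and answers into one space goes here.
    vecs = []
    for t in texts:
        seed = int(hashlib.md5(t.encode()).hexdigest(), 16) % 2**32
        vecs.append(np.random.default_rng(seed).normal(size=dim))
    return np.stack(vecs)

questions = ["how do I reset my password?"]
answers = [
    "click 'forgot password' on the login page",
    "the office opens at 9am",
    "use the settings menu to change your email",
]

Q = embed(questions)
A = embed(answers)
Q /= np.linalg.norm(Q, axis=1, keepdims=True)   # unit-normalize so that
A /= np.linalg.norm(A, axis=1, keepdims=True)   # dot product = cosine

scores = (Q @ A.T)[0]                   # cosine similarity to each answer
best = int(np.argmax(scores))
print(f"best answer: {answers[best]!r} (score {scores[best]:.2f})")
```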
no code implementations • 27 Dec 2016 • Vishal Kakkar, Shirish K. Shevade, S. Sundararajan, Dinesh Garg
Batch learning methods for solving the kernelized version of this problem suffer from scalability issues and may not result in sparse classifiers.
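A generic sketch of the online, sparse alternative: a budgeted kernel perceptron that caps the number of stored support vectors. This illustrates the online/sparsity idea in general, not the specific algorithm proposed in the paper.

```python
import numpy as np

# Budgeted kernel perceptron: an online learner that keeps a bounded set
# of support vectors, so the classifier stays sparse. Generic sketch only.
rng = np.random.default_rng(0)

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

budget = 20
sv_x, sv_y = [], []                     # support vectors and their labels

# Toy stream: two Gaussian blobs labeled +1 / -1.
for _ in range(500):
    y = rng.choice([-1, 1])
    x = rng.normal(loc=2.0 * y, scale=1.0, size=2)
    score = sum(yi * rbf(x, xi) for xi, yi in zip(sv_x, sv_y))
    if y * score <= 0:                  # mistake: add a support vector
        sv_x.append(x)
        sv_y.append(y)
        if len(sv_x) > budget:          # enforce sparsity: drop the oldest
            sv_x.pop(0)
            sv_y.pop(0)

print(f"final classifier uses {len(sv_x)} support vectors")
```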
no code implementations • 25 Jan 2016 • Divya Padmanabhan, Satyanath Bhat, Dinesh Garg, Shirish Shevade, Y. Narahari
We study the problem of training an accurate linear regression model by procuring labels from multiple noisy crowd annotators, under a budget constraint.
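A minimal sketch of the underlying estimation problem: each annotator adds Gaussian noise with their own variance, and weighted least squares downweights noisier annotators. This is a generic illustration; the paper's mechanism additionally handles the budget constraint and label procurement, and the noise variances here are assumed known.

```python
import numpy as np

# Regression from noisy crowd labels: annotator j corrupts the true label
# with variance noise_var[j]; weighting each label by 1/variance recovers
# the model more accurately than treating annotators as interchangeable.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
noise_var = np.array([0.1, 0.5, 2.0])   # per-annotator label noise variances

X, y, var = [], [], []
for _ in range(300):
    x = rng.normal(size=2)
    j = rng.integers(3)                 # which annotator labels this point
    X.append(x)
    y.append(x @ w_true + rng.normal(scale=np.sqrt(noise_var[j])))
    var.append(noise_var[j])
X, y, var = np.array(X), np.array(y), np.array(var)

# Weighted least squares: solve (X^T W X) w = X^T W y with W = diag(1/var).
W = np.diag(1.0 / var)
w_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("estimated weights:", w_hat, "true:", w_true)
```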