Search Results for author: Dinesh Garg

Found 18 papers, 6 papers with code

Zero-shot Entity Linking with Less Data

2 code implementations · Findings (NAACL) 2022 · G P Shrivatsa Bhargav, Dinesh Khandelwal, Saswati Dana, Dinesh Garg, Pavan Kapanipathi, Salim Roukos, Alexander Gray, L Venkata Subramaniam

Interestingly, we discovered that BLINK exhibits diminishing returns, i.e., it reaches 98% of its performance with just 1% of the training data, and the remaining 99% of the data yields only a marginal 2% gain in performance.

Entity Linking · Multi-Task Learning +2

Read between the lines -- Functionality Extraction From READMEs

no code implementations · 15 Mar 2024 · Prince Kumar, Srikanth Tamilselvam, Dinesh Garg

While text summarization is a well-known NLP task, in this paper, we introduce a novel and useful variant of it called functionality extraction from Git README files.

Code Summarization · text2text-generation +2

Fill in the Blank: Exploring and Enhancing LLM Capabilities for Backward Reasoning in Math Word Problems

no code implementations · 3 Oct 2023 · Aniruddha Deb, Neeva Oza, Sarthak Singla, Dinesh Khandelwal, Dinesh Garg, Parag Singla

Utilizing the specific format of this task, we propose three novel techniques that improve performance: Rephrase reformulates the given problem into a forward reasoning problem; PAL-Tools combines the idea of Program-Aided LLMs to produce a set of equations that can be solved by an external solver; and Check your Work exploits the availability of a natural, high-accuracy verifier in the forward direction, interleaving solving and verification steps.

GSM8K · Math
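The "Check your Work" idea above can be illustrated with a minimal sketch: to answer a backward-reasoning question (recover an unknown given the final answer), propose candidates for the unknown, run the much easier forward computation, and keep the candidate the forward pass verifies. The toy word problem and all function names here are illustrative, not the paper's implementation.

```python
def forward(x):
    # Forward word problem (toy): start with x apples, buy 7, give away 3.
    return x + 7 - 3

def solve_backward(target, candidates):
    """Return the first candidate whose forward computation hits the target."""
    for x in candidates:
        if forward(x) == target:  # forward pass acts as a high-accuracy verifier
            return x
    return None

# Backward question: the final count is 12 -- how many apples did we start with?
print(solve_backward(12, range(0, 100)))  # -> 8
```

In the paper's setting the candidate generation is done by the LLM rather than brute-force enumeration; the point is that forward verification is interleaved with solving.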

Image Manipulation via Multi-Hop Instructions -- A New Dataset and Weakly-Supervised Neuro-Symbolic Approach

no code implementations · 23 May 2023 · Harman Singh, Poorva Garg, Mohit Gupta, Kevin Shah, Ashish Goswami, Satyam Modi, Arnab Kumar Mondal, Dinesh Khandelwal, Dinesh Garg, Parag Singla

We are interested in image manipulation via natural language text -- a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces.

Image Manipulation · Question Answering +1

A Benchmark for Generalizable and Interpretable Temporal Question Answering over Knowledge Bases

no code implementations · 15 Jan 2022 · Sumit Neelam, Udit Sharma, Hima Karanam, Shajith Ikbal, Pavan Kapanipathi, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Young-suk Lee, Santosh Srivastava, Cezar Pendus, Saswati Dana, Dinesh Garg, Achille Fokoue, G P Shrivatsa Bhargav, Dinesh Khandelwal, Srinivas Ravishankar, Sairam Gurajada, Maria Chang, Rosario Uceda-Sosa, Salim Roukos, Alexander Gray, Guilherme Lima, Ryan Riegel, Francois Luus, L Venkata Subramaniam

Specifically, our benchmark is a temporal question answering dataset with the following advantages: (a) it is based on Wikidata, the most frequently curated, openly available knowledge base; (b) it includes intermediate SPARQL queries to facilitate the evaluation of semantic parsing based approaches for KBQA; and (c) it generalizes to multiple knowledge bases: Freebase and Wikidata.

Knowledge Base Question Answering · Semantic Parsing

Knowledge Graph Question Answering via SPARQL Silhouette Generation

no code implementations · 6 Sep 2021 · Sukannya Purkayastha, Saswati Dana, Dinesh Garg, Dinesh Khandelwal, G P Shrivatsa Bhargav

Experimental results show that the quality of the generated SPARQL silhouette in the first stage is outstanding in the ideal scenario, but in realistic scenarios (i.e., with a noisy linker) it drops drastically.

Graph Question Answering · Knowledge Graphs +3
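As a rough illustration (not the paper's seq2seq model), a SPARQL silhouette can be thought of as a query skeleton whose entity and relation slots are later filled in by a (possibly noisy) linker. The template, slot names, and linker output below are made up for the sketch.

```python
def fill_silhouette(silhouette, links):
    """Replace placeholder slots in a SPARQL skeleton with linked IRIs."""
    query = silhouette
    for slot, iri in links.items():
        query = query.replace(slot, iri)
    return query

# Skeleton produced in stage one; <E1>/<R1> are unresolved slots.
silhouette = "SELECT ?x WHERE { <E1> <R1> ?x . }"
# Hypothetical linker output mapping slots to Wikidata identifiers.
links = {"<E1>": "wd:Q76", "<R1>": "wdt:P26"}

print(fill_silhouette(silhouette, links))
# -> SELECT ?x WHERE { wd:Q76 wdt:P26 ?x . }
```

A noisy linker would map `<E1>` or `<R1>` to the wrong identifier, which is exactly the realistic scenario where the abstract reports a quality drop.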

Explanations for CommonsenseQA: New Dataset and Models

no code implementations · AKBC Workshop CSKB 2021 · Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, Dinesh Garg

We human-annotate a first-of-its-kind dataset (called ECQA) of positive and negative properties, as well as free-flow explanations, for 11K QA pairs taken from the CQA dataset.

Common Sense Reasoning · Explanation Generation +4

Quantum Embedding of Knowledge for Reasoning

1 code implementation · NeurIPS 2019 · Dinesh Garg, Shajith Ikbal Mohamed, Santosh K. Srivastava, Harit Vishwakarma, Hima Karanam, L. Venkata Subramaniam

Statistical Relational Learning (SRL) methods are the most widely used techniques to generate distributional representations of the symbolic Knowledge Bases (KBs).

Logical Reasoning · Relational Reasoning

Span Selection Pre-training for Question Answering

1 code implementation · ACL 2020 · Michael Glass, Alfio Gliozzo, Rishav Chakravarti, Anthony Ferritto, Lin Pan, G P Shrivatsa Bhargav, Dinesh Garg, Avirup Sil

BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA).

Language Modelling · Memorization +4

Deep Domain Adaptation under Deep Label Scarcity

no code implementations · 20 Sep 2018 · Amar Prakash Azad, Dinesh Garg, Priyanka Agrawal, Arun Kumar

The goal behind Domain Adaptation (DA) is to leverage the labeled examples from a source domain so as to infer an accurate model in a target domain where labels are unavailable or scarce at best.

Domain Adaptation · Transductive Learning

Improved Linear Embeddings via Lagrange Duality

no code implementations · 30 Nov 2017 · Kshiteej Sheth, Dinesh Garg, Anirban Dasgupta

Near isometric orthogonal embeddings to lower dimensions are a fundamental tool in data science and machine learning.
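For intuition about what a near-isometric orthogonal embedding is (this is a generic baseline, not the paper's Lagrange-duality method), one can build an orthonormal map from R^d to R^k via Gram-Schmidt and check that it never expands pairwise distances; a good embedding also avoids shrinking them much on the data of interest.

```python
import math
import random

def gram_schmidt(vectors):
    """Orthonormalize a list of d-dimensional vectors (classic Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:  # skip (near-)dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

def embed(point, basis):
    """Project a d-dim point onto the k orthonormal directions."""
    return [sum(p * b for p, b in zip(point, row)) for row in basis]

random.seed(0)
d, k = 8, 4
basis = gram_schmidt([[random.gauss(0, 1) for _ in range(d)] for _ in range(k)])

x = [random.gauss(0, 1) for _ in range(d)]
y = [random.gauss(0, 1) for _ in range(d)]
orig_dist = math.dist(x, y)
emb_dist = math.dist(embed(x, basis), embed(y, basis))
# An orthonormal projection is 1-Lipschitz: it never expands distances.
print(emb_dist <= orig_dist + 1e-9)  # -> True
```

The optimization studied in the paper goes further, choosing the orthonormal directions so that distances on the given data are distorted as little as possible.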

Latent Space Embedding for Retrieval in Question-Answer Archives

no code implementations · EMNLP 2017 · Deepak P, Dinesh Garg, Shirish Shevade

The idea is that such a space mirrors semantic similarity among questions as well as answers, thereby enabling high quality retrieval.

Question Answering · Retrieval +4

A Sparse Nonlinear Classifier Design Using AUC Optimization

no code implementations · 27 Dec 2016 · Vishal Kakkar, Shirish K. Shevade, S. Sundararajan, Dinesh Garg

Batch learning methods for solving the kernelized version of this problem suffer from scalability issues and may not result in sparse classifiers.

A Robust UCB Scheme for Active Learning in Regression from Strategic Crowds

no code implementations · 25 Jan 2016 · Divya Padmanabhan, Satyanath Bhat, Dinesh Garg, Shirish Shevade, Y. Narahari

We study the problem of training an accurate linear regression model by procuring labels from multiple noisy crowd annotators, under a budget constraint.

Active Learning · regression +1
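As background for the UCB-based scheme above, here is the standard UCB1 index (a generic sketch; the paper's robust variant for strategic crowds is not detailed here): pick the annotator/arm with the highest empirical mean plus an exploration bonus that grows for under-sampled arms.

```python
import math

def ucb1_pick(means, counts, t):
    """means[i]: empirical mean reward of arm i; counts[i]: pulls so far; t: round."""
    scores = [m + math.sqrt(2 * math.log(t) / n) for m, n in zip(means, counts)]
    return max(range(len(scores)), key=scores.__getitem__)

# Three annotators with identical empirical means, but arm 2 has been pulled
# only once, so its exploration bonus dominates and it gets chosen next.
print(ucb1_pick([0.5, 0.5, 0.5], [10, 10, 1], t=21))  # -> 2
```

In the crowd-labeling setting, "reward" would correspond to an estimate of annotator quality, so the scheme balances exploiting known-good annotators against probing uncertain ones within the budget.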
