Semantic Parsing

381 papers with code • 20 benchmarks • 42 datasets

Semantic Parsing is the task of transducing natural language utterances into formal meaning representations. The target meaning representations can be defined according to a wide variety of formalisms. These include linguistically motivated semantic representations designed to capture the meaning of any sentence, such as λ-calculus or Abstract Meaning Representation (AMR). Alternatively, for more task-driven approaches to Semantic Parsing, it is common for the meaning representations to be executable programs such as SQL queries, robotic commands, smartphone instructions, and even general-purpose programming languages like Python and Java.

Source: TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
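
To make the task definition above concrete, here is a minimal, hedged sketch of a single semantic parsing example: a natural-language utterance paired with an executable SQL meaning representation. The table schema, utterance, and gold query are illustrative inventions, not drawn from any particular benchmark.

```python
# Toy illustration of semantic parsing as NL -> executable SQL.
# The table schema, utterance, and gold query below are hypothetical.
import sqlite3

utterance = "How many papers were published in 2024?"
gold_sql = "SELECT COUNT(*) FROM papers WHERE year = 2024"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?)",
    [("QueryAgent", 2024), ("Ar-Spider", 2024), ("TRANX", 2018)],
)

# A semantic parser maps `utterance` to a query like `gold_sql`; executing the
# predicted query and comparing answers (denotations) is a common way to score it.
print(conn.execute(gold_sql).fetchone()[0])  # -> 2
```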

Latest papers with no code

Beyond Alignment: Blind Video Face Restoration via Parsing-Guided Temporal-Coherent Transformer

no code yet • 21 Apr 2024

Multiple complex degradations are coupled in low-quality video faces in the real world.

Neural Semantic Parsing with Extremely Rich Symbolic Meaning Representations

no code yet • 19 Apr 2024

We introduce a neural "taxonomical" semantic parser that utilizes this new representation system of predicates, and compare it with a standard neural semantic parser trained on the traditional meaning representation format, using a novel challenge set and evaluation metric.

Towards Compositionally Generalizable Semantic Parsing in Large Language Models: A Survey

no code yet • 15 Apr 2024

Compositional generalization is the ability of a model to generalize to complex, previously unseen combinations of entities after having seen only the primitives.
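
As a toy illustration of that definition (not an example from the survey itself), the sketch below uses SCAN-style command primitives: a model that has seen "jump" and "walk twice" during training is expected to generalize to the unseen combination "jump twice". The vocabulary and rules are simplified inventions.

```python
# Toy, SCAN-style illustration of compositional generalization (simplified).
# The primitives and the "twice" modifier are seen in training; the held-out
# test item combines them in a way never observed during training.
PRIMITIVES = {"walk": "WALK", "run": "RUN", "jump": "JUMP"}

def interpret(command: str) -> str:
    """Gold compositional interpretation: '<primitive> twice' repeats the action."""
    tokens = command.split()
    action = PRIMITIVES[tokens[0]]
    return " ".join([action, action]) if tokens[-1] == "twice" else action

train_commands = ["walk", "run", "jump", "walk twice", "run twice"]  # seen
test_commands = ["jump twice"]                                       # unseen combination

for cmd in test_commands:
    print(cmd, "->", interpret(cmd))  # a compositionally general model should output JUMP JUMP
```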

Gaining More Insight into Neural Semantic Parsing with Challenging Benchmarks

no code yet • 12 Apr 2024

The Parallel Meaning Bank (PMB) serves as a corpus for semantic processing with a focus on semantic parsing and text generation.

Self-Improvement Programming for Temporal Knowledge Graph Question Answering

no code yet • 2 Apr 2024

Temporal Knowledge Graph Question Answering (TKGQA) aims to answer questions with temporal intent over Temporal Knowledge Graphs (TKGs).
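For readers unfamiliar with the setting, the hedged sketch below shows what a question with temporal intent looks like over a toy TKG stored as (subject, relation, object, timestamp) quadruples; it illustrates the task only and is not the paper's self-improvement programming method.

```python
# Toy temporal knowledge graph: (subject, relation, object, year) quadruples.
# Facts and the question are hypothetical illustrations of TKGQA.
tkg = [
    ("Alice", "member_of", "TeamX", 2019),
    ("Alice", "member_of", "TeamY", 2022),
    ("Bob",   "member_of", "TeamX", 2022),
]

def teams_of(person, year):
    """Answer 'Which team was <person> a member of in <year>?' by filtering on the timestamp."""
    return [obj for subj, rel, obj, t in tkg
            if subj == person and rel == "member_of" and t == year]

print(teams_of("Alice", 2022))  # -> ['TeamY']
```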

QueryAgent: A Reliable and Efficient Reasoning Framework with Environmental Feedback based Self-Correction

no code yet • 18 Mar 2024

Employing Large Language Models (LLMs) for semantic parsing has achieved remarkable success.
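The idea of environmental-feedback-based self-correction can be sketched as a generate-execute-revise loop; the mock LLM and queries below are stand-ins for illustration only, not QueryAgent's actual prompts or components.

```python
# Hedged sketch of self-correction with environmental feedback: generate a query,
# execute it, and feed any execution error back to the generator for a revision.
# `mock_llm` is a placeholder for a real LLM call.
import sqlite3

def mock_llm(question, feedback=None):
    if feedback is None:
        return "SELECT nme FROM papers"   # first attempt contains a typo
    return "SELECT name FROM papers"      # revised after seeing the error message

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (name TEXT)")
conn.execute("INSERT INTO papers VALUES ('QueryAgent')")

feedback, rows = None, None
for _ in range(3):                        # bounded number of correction rounds
    sql = mock_llm("List all paper names.", feedback)
    try:
        rows = conn.execute(sql).fetchall()
        break
    except sqlite3.OperationalError as err:   # the environment reports the failure
        feedback = str(err)

print(rows)  # -> [('QueryAgent',)]
```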

Interactive-KBQA: Multi-Turn Interactions for Knowledge Base Question Answering with Large Language Models

no code yet • 23 Feb 2024

For each category of complex question, we devised exemplars to guide LLMs through the reasoning processes.
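A minimal sketch of what exemplar-guided prompting might look like follows; the exemplars, SPARQL-like logical forms, and prompt layout are hypothetical and not the paper's actual templates.

```python
# Hedged sketch: assembling a few-shot prompt from exemplars for a KBQA question.
# The exemplars and logical forms are invented for illustration.
EXEMPLARS = [
    ("Who directed Inception?",
     "SELECT ?d WHERE { :Inception :director ?d }"),
    ("Which films did Nolan direct after 2010?",
     "SELECT ?f WHERE { ?f :director :Nolan . ?f :year ?y . FILTER(?y > 2010) }"),
]

def build_prompt(question):
    shots = "\n\n".join(f"Question: {q}\nLogical form: {lf}" for q, lf in EXEMPLARS)
    return f"{shots}\n\nQuestion: {question}\nLogical form:"

print(build_prompt("Who wrote the screenplay for Memento?"))
```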

Ar-Spider: Text-to-SQL in Arabic

no code yet • 22 Feb 2024

The baselines demonstrate decent single-language performance on our Arabic text-to-SQL dataset, Ar-Spider, achieving 62.48% for S2SQL and 65.57% for LGESQL, only 8.79% below the highest results achieved by the baselines when trained on the English dataset.
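
Text-to-SQL benchmarks in the Spider family are commonly scored with exact-match-style accuracy; below is a deliberately simplified, string-level sketch of such a metric, not the dataset's official evaluator (which parses the SQL and compares clause sets).

```python
# Simplified sketch of exact-match accuracy for text-to-SQL predictions.
# Real Spider-style evaluation compares parsed SQL components; this toy
# version only normalizes case and whitespace before comparing strings.
def normalize(sql):
    return " ".join(sql.lower().split())

def exact_match_accuracy(preds, golds):
    hits = sum(normalize(p) == normalize(g) for p, g in zip(preds, golds))
    return hits / len(golds)

preds = ["SELECT name FROM students WHERE age > 20", "select count(*) from cities"]
golds = ["SELECT name FROM students WHERE age > 20", "SELECT AVG(pop) FROM cities"]
print(exact_match_accuracy(preds, golds))  # -> 0.5
```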

Training Table Question Answering via SQL Query Decomposition

no code yet • 19 Feb 2024

Table Question-Answering involves both understanding the natural language query and grounding it in the context of the input table to extract the relevant information.

Neural Models for Source Code Synthesis and Completion

no code yet • 8 Feb 2024

In this master's thesis, we present sequence-to-sequence deep learning models and training paradigms that map NL to general-purpose programming languages, assisting users with source-code snippet suggestions for a given NL intent and extending auto-completion of source code while they write it.