DRS Parsing
5 papers with code • 2 benchmarks • 0 datasets
Discourse Representation Structures (DRS) are formal meaning representations introduced by Discourse Representation Theory. DRS parsing is a complex task that subsumes several other NLP tasks, such as semantic role labeling, word sense disambiguation, co-reference resolution and named entity tagging. DRSs also make the scope of certain operators explicit, which allows for a more principled and linguistically motivated treatment of negation, modals and quantification, as advocated in formal semantics. Moreover, DRSs can be translated to formal logic, enabling automated inference by external tools.
Description from NLP Progress
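To make the description above concrete, here is a minimal sketch of a DRS as a recursive data structure, together with the standard DRT-to-FOL translation (referents become existential quantifiers, conditions a conjunction, negated sub-boxes a negation). The class and function names are our own illustrative choices, not a standard library API.

```python
from dataclasses import dataclass, field

@dataclass
class DRS:
    referents: list                               # discourse referents introduced in this box
    conditions: list                              # atomic conditions, e.g. ("man", "x1")
    negated: list = field(default_factory=list)   # sub-DRSs under the scope of NOT

# "A man is not happy": the referent x1 and the condition man(x1)
# live in the outer box, while happy(x1) sits inside a negated
# sub-box, making the scope of the negation explicit.
drs = DRS(
    referents=["x1"],
    conditions=[("man", "x1")],
    negated=[DRS(referents=[], conditions=[("happy", "x1")])],
)

def to_fol(d: DRS) -> str:
    """Translate a DRS to a first-order logic string."""
    parts = [f"{pred}({arg})" for pred, arg in d.conditions]
    parts += [f"-{to_fol(sub)}" for sub in d.negated]
    body = " & ".join(parts)
    for r in reversed(d.referents):
        body = f"exists {r}.({body})"
    return body

print(to_fol(drs))  # exists x1.(man(x1) & -happy(x1))
```

This explicit box structure is what distinguishes DRSs from flatter meaning representations: the negation takes scope over exactly the conditions inside its sub-box.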
Most implemented papers
Character-level Representations Improve DRS-based Semantic Parsing Even in the Age of BERT
We combine character-level and contextual language model representations to improve performance on Discourse Representation Structure parsing.
Exploring Neural Methods for Parsing Discourse Representation Structures
Neural methods have had several recent successes in semantic parsing, though they have yet to face the challenge of producing meaning representations based on formal semantics.
A Top-Down Neural Architecture towards Text-Level Parsing of Discourse Rhetorical Structure
Due to its importance in deep natural language understanding and various downstream applications, text-level parsing of discourse rhetorical structure (DRS) has attracted increasing attention in recent years.
Adversarial Learning for Discourse Rhetorical Structure Parsing
Text-level discourse rhetorical structure (DRS) parsing is known to be challenging due to the notorious lack of training data.
Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation
Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics.