Transition-Based Dependency Parsing
12 papers with code • 0 benchmarks • 0 datasets
Latest papers with no code
Structured Sentiment Analysis as Transition-based Dependency Parsing
Structured sentiment analysis (SSA) aims to automatically extract people's opinions from natural language text and represent that information in a graph structure.
Greedy Transition-Based Dependency Parsing with Discrete and Continuous Supertag Features
We study the effect of rich supertag features in greedy transition-based dependency parsing.
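To ground the terminology, here is a minimal sketch of the greedy arc-standard transition system that such parsers build on. The `score` function is a hypothetical stand-in for a trained classifier (one that, in this paper's setting, would consume supertag features among others); random scores are used only to keep the sketch self-contained and runnable.

```python
import random

def score(action, stack, buffer, arcs):
    # Hypothetical stand-in for a trained classifier. A real greedy parser
    # would extract features of the configuration (words, POS tags, and,
    # in the paper above, supertags) and score each action with a model.
    return random.random()

def greedy_parse(n_tokens):
    """Greedy arc-standard parsing over token indices 1..n_tokens;
    index 0 is the artificial ROOT. One left-to-right pass, no backtracking."""
    stack, buffer, arcs = [0], list(range(1, n_tokens + 1)), []
    while buffer or len(stack) > 1:
        valid = []
        if buffer:
            valid.append("SHIFT")
        if len(stack) >= 3:                # LEFT-ARC may not make ROOT a dependent
            valid.append("LEFT-ARC")
        if len(stack) >= 2:
            valid.append("RIGHT-ARC")
        act = max(valid, key=lambda a: score(a, stack, buffer, arcs))
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":            # top of stack heads the item below it
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        else:                              # RIGHT-ARC: item below heads the top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

print(greedy_parse(4))   # four (head, dependent) arcs: one head per token
```

The greedy choice is what makes these parsers fast (linear time in sentence length) but also what makes their feature representation, supertags included, so important: a single bad local decision cannot be undone.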
Transition-Based Dependency Parsing using Perceptron Learner
Syntactic parsing using dependency structures has become a standard technique in natural language processing with many different parsing models, in particular data-driven models that can be trained on syntactically annotated corpora.
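As a rough illustration of how a perceptron learner fits into transition-based parsing, the sketch below shows a linear scoring function and the standard perceptron update over transition features; the feature strings and helper names are illustrative, not taken from the paper.

```python
from collections import defaultdict

def transition_score(weights, feats):
    """Linear model: a transition's score is the sum of the weights
    of its active features."""
    return sum(weights[f] for f in feats)

def perceptron_update(weights, gold_feats, pred_feats):
    """Standard perceptron step: when the predicted transition differs
    from the oracle one, promote the gold features and demote the
    predicted ones."""
    if gold_feats != pred_feats:
        for f in gold_feats:
            weights[f] += 1.0
        for f in pred_feats:
            weights[f] -= 1.0

# Toy usage with made-up configuration features:
weights = defaultdict(float)
gold = {"s0.pos=NN|SHIFT", "b0.word=dog|SHIFT"}
pred = {"s0.pos=NN|LEFT-ARC", "b0.word=dog|LEFT-ARC"}
perceptron_update(weights, gold, pred)
assert transition_score(weights, gold) > transition_score(weights, pred)
```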
Vietnamese transition-based dependency parsing with supertag features
In recent years, dependency parsing has become a fascinating research topic with many applications in natural language processing.
Neural Syntactic Generative Models with Exact Marginalization
We present neural syntactic generative models with exact marginalization that support both dependency parsing and language modeling.
Cache Transition Systems for Graph Parsing
Motivated by the task of semantic parsing, we describe a transition system that generalizes standard transition-based dependency parsing techniques to generate a graph rather than a tree.
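Independent of the specific cache-based system the paper describes, the core relaxation when generating graphs rather than trees is dropping the single-head constraint, so a token may receive multiple incoming arcs (reentrancy). A minimal sketch of that difference, with toy indices:

```python
def add_arc(arcs, head, dep, single_head=True):
    """Attach `dep` to `head`. Tree parsing enforces one head per token;
    graph parsing (as needed for semantic structures) drops that check,
    allowing reentrancies such as two predicates sharing an argument."""
    if single_head and any(d == dep for _, d in arcs):
        raise ValueError(f"token {dep} already has a head")
    arcs.append((head, dep))

arcs = []
add_arc(arcs, 2, 4, single_head=False)   # toy indices: head 2 takes dependent 4
add_arc(arcs, 5, 4, single_head=False)   # a second head for token 4: a reentrancy
```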
Incremental Graph-based Neural Dependency Parsing
Very recently, some studies on neural dependency parsers have shown advantages over traditional ones across a wide variety of languages.
Stack-based Multi-layer Attention for Transition-based Dependency Parsing
Although sequence-to-sequence (seq2seq) networks have achieved significant success in many NLP tasks such as machine translation and text summarization, simply applying this approach to transition-based dependency parsing does not yield performance gains comparable to other state-of-the-art methods, such as stack-LSTM and head selection.