Chinese Dependency Parsing
6 papers with code • 1 benchmark • 0 datasets
Most implemented papers
Glyce: Glyph-vectors for Chinese Character Representations
However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found.
Efficient Second-Order TreeCRF for Neural Dependency Parsing
Experiments and analysis on 27 datasets from 13 languages clearly show that techniques developed before the DL era, such as structural learning (global TreeCRF loss) and high-order modeling, are still useful and can further boost parsing performance over the state-of-the-art biaffine parser, especially for partially annotated training data.
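To make the baseline concrete, below is a minimal sketch of first-order biaffine arc scoring in the style of the biaffine parser referenced above; it is illustrative only (the paper adds second-order scoring and a global TreeCRF loss on top), and the class name and dimensions are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Illustrative first-order arc scorer, not the paper's implementation."""
    def __init__(self, hidden_dim=400, arc_dim=500):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # biaffine weight; the extra row handles the head-side bias term
        self.W = nn.Parameter(torch.randn(arc_dim + 1, arc_dim) * 0.01)

    def forward(self, h):                          # h: (batch, seq_len, hidden_dim)
        head = self.head_mlp(h)                    # (batch, seq_len, arc_dim)
        dep = self.dep_mlp(h)                      # (batch, seq_len, arc_dim)
        ones = head.new_ones(head.size(0), head.size(1), 1)
        head = torch.cat([head, ones], dim=-1)     # append constant bias feature
        # scores[b, i, j] = biaffine score for the arc with head i and dependent j
        return head @ self.W @ dep.transpose(1, 2)

scores = BiaffineArcScorer()(torch.randn(2, 10, 400))   # -> (2, 10, 10)
```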
Hybrid Oracle: Making Use of Ambiguity in Transition-based Chinese Dependency Parsing
In the training of transition-based dependency parsers, an oracle is used to predict a transition sequence for a sentence and its gold tree.
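For readers unfamiliar with transition-based training, the sketch below shows a standard static arc-standard oracle that derives one gold transition sequence from a gold tree; it is a simplified illustration under the usual projectivity assumption, not the paper's hybrid oracle, which instead exploits configurations where several transitions are correct.

```python
def static_oracle(heads):
    """heads[i-1] is the gold head of token i (tokens are 1-based, 0 = ROOT)."""
    n = len(heads)
    stack, buffer = [0], list(range(1, n + 1))
    arcs, transitions = set(), []

    # a token may be attached to its head only after all of its own dependents are attached
    def has_all_children(tok):
        return all(heads[d - 1] != tok or (tok, d) in arcs for d in range(1, n + 1))

    while buffer or len(stack) > 1:
        if len(stack) >= 2:
            s1, s2 = stack[-1], stack[-2]
            if s2 != 0 and heads[s2 - 1] == s1:            # LEFT-ARC: s1 -> s2
                transitions.append("LEFT-ARC"); arcs.add((s1, s2)); stack.pop(-2); continue
            if heads[s1 - 1] == s2 and has_all_children(s1):  # RIGHT-ARC: s2 -> s1
                transitions.append("RIGHT-ARC"); arcs.add((s2, s1)); stack.pop(); continue
        if not buffer:
            raise ValueError("gold tree is non-projective; no valid transition")
        transitions.append("SHIFT"); stack.append(buffer.pop(0))
    return transitions

# "He eats fish": heads of tokens 1..3 are [2, 0, 2]
print(static_oracle([2, 0, 2]))
# ['SHIFT', 'SHIFT', 'LEFT-ARC', 'SHIFT', 'RIGHT-ARC', 'RIGHT-ARC']
```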
Global Transition-based Non-projective Dependency Parsing
Shi, Huang, and Lee (2017) obtained state-of-the-art results for English and Chinese dependency parsing by combining dynamic-programming implementations of transition-based dependency parsers with a minimal set of bidirectional LSTM features.
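The "minimal set of bidirectional LSTM features" can be pictured as follows: the sentence is encoded once with a BiLSTM, and each parser state is scored from the BiLSTM vectors at only a few positions (for example, the two stack tops and the buffer front). The sketch below is a rough illustration of that idea under assumed names and dimensions; it omits the dynamic-programming decoding that the paper combines it with.

```python
import torch
import torch.nn as nn

class MinimalFeatureScorer(nn.Module):
    """Illustrative scorer over a small set of BiLSTM positional features."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, n_transitions=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.out = nn.Linear(3 * 2 * hidden_dim, n_transitions)

    def forward(self, word_ids, s1, s0, b0):
        # word_ids: (batch, seq_len); s1, s0, b0: (batch,) position indices
        vecs, _ = self.bilstm(self.embed(word_ids))          # (batch, seq_len, 2*hidden)
        idx = torch.stack([s1, s0, b0], dim=1)               # (batch, 3)
        feats = torch.gather(vecs, 1, idx.unsqueeze(-1).expand(-1, -1, vecs.size(-1)))
        return self.out(feats.flatten(1))                    # (batch, n_transitions)
```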
Semi-supervised Domain Adaptation for Dependency Parsing
Over the past decades, due to the lack of sufficient labeled data, most studies on cross-domain parsing have focused on unsupervised domain adaptation, assuming there is no target-domain training data.
Character-Level Chinese Dependency Parsing via Modeling Latent Intra-Word Structure
Revealing the syntactic structure of sentences in Chinese poses significant challenges for word-level parsers due to the absence of clear word boundaries.