To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge base.
Ranked #1 on Named Entity Recognition on BC2GM
Further analysis shows that Lattice-BERT can harness the lattice structures, and the improvement comes from the exploration of redundant information and multi-granularity representations.
Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.
In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a.
This paper describes our system (HIT-SCIR) for CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing.
Ranked #1 on UCCA Parsing on CoNLL 2019
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system.
Ranked #3 on Task-Oriented Dialogue Systems on KVRET
In this approach, a linear transformation is learned from contextual word alignments to align contextualized embeddings that were independently trained in different languages.
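The alignment step described above can be sketched as an orthogonal Procrustes problem: given paired embedding vectors from word alignments, find a linear map that carries one space onto the other. The following is a minimal illustration with toy data, not the paper's implementation; the closed-form SVD solution shown here is one standard way to learn such a transformation.

```python
import numpy as np

def learn_alignment(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal Procrustes: return W minimizing ||X @ W - Y||_F
    over orthogonal matrices, via the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy paired embeddings: 100 aligned word pairs, 16 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
true_W, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # a hidden rotation
Y = X @ true_W                                       # "target-language" vectors

W = learn_alignment(X, Y)
# Check that the learned map recovers the hidden alignment.
print(np.allclose(X @ W, Y, atol=1e-5))
```

With real contextualized embeddings the fit would of course be approximate rather than exact, since the two spaces are only related up to noise.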
It calculates the emission score with similarity-based methods and obtains the transition score with a specially designed transfer mechanism.
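A common form of similarity-based emission scoring, sketched below with toy numbers, computes each label's prototype as the mean embedding of its support tokens and scores a query token by cosine similarity to each prototype. This is a generic prototypical-network-style illustration; the paper's specific similarity function and its transfer mechanism for transition scores are not reproduced here.

```python
import numpy as np

def prototypes(support_emb, support_labels, num_labels):
    """Mean support-token embedding per label."""
    labels = np.array(support_labels)
    return np.stack([support_emb[labels == l].mean(axis=0)
                     for l in range(num_labels)])

def emission_scores(query_emb, protos):
    """Cosine similarity between each query token and each label prototype."""
    q = query_emb / np.linalg.norm(query_emb, axis=-1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=-1, keepdims=True)
    return q @ p.T

rng = np.random.default_rng(1)
support_emb = rng.normal(size=(6, 8))   # 6 support tokens, dim 8
support_labels = [0, 0, 1, 1, 2, 2]     # 3 slot labels
protos = prototypes(support_emb, support_labels, 3)

query_emb = rng.normal(size=(4, 8))     # 4 query tokens
scores = emission_scores(query_emb, protos)
print(scores.shape)                     # one score per query token per label
```

These emission scores would then feed into a CRF-style decoder together with transition scores.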
In this paper, we propose a new rich-resource-enhanced AMR aligner that produces multiple alignments, and a new transition system for AMR parsing along with its oracle parser.
Ranked #1 on AMR Parsing on LDC2014T12
This paper describes our system (HIT-SCIR) submitted to the CoNLL 2018 shared task on Multilingual Parsing from Raw Text to Universal Dependencies.
Ranked #3 on Dependency Parsing on Universal Dependencies
Classic pipeline models for task-oriented dialogue systems require explicit modeling of the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base.
Ranked #4 on Task-Oriented Dialogue Systems on KVRET
Many natural language processing tasks can be cast as structured prediction and solved as a search problem.
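As a concrete illustration of treating structured prediction as search, the toy sketch below finds a tag sequence maximizing the sum of per-step scores with beam search. The tag set and scores are invented for the example and do not come from the paper.

```python
def beam_search(step_scores, beam_size=2):
    """step_scores[t][tag] -> score for emitting `tag` at step t.
    Returns the highest-scoring (sequence, total_score) pair found."""
    beams = [([], 0.0)]
    for scores in step_scores:
        # Expand every partial sequence with every tag, keep the top-k.
        candidates = [(seq + [tag], total + s)
                      for seq, total in beams
                      for tag, s in scores.items()]
        beams = sorted(candidates, key=lambda x: -x[1])[:beam_size]
    return beams[0]

step_scores = [
    {"B": 1.0, "I": 0.2, "O": 0.5},
    {"B": 0.1, "I": 0.9, "O": 0.3},
    {"B": 0.2, "I": 0.1, "O": 0.8},
]
seq, total = beam_search(step_scores, beam_size=2)
print(seq, total)  # best sequence and its total score
```

With beam_size equal to the number of tags raised to the sequence length this reduces to exhaustive search; smaller beams trade exactness for speed, which is the usual motivation for search-based formulations.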
Many natural language processing (NLP) tasks can be generalized into a segmentation problem.