Nested entities, which arise in many domains due to the compositionality of language, cannot be easily recognized by the widely used sequence labeling framework.
Ranked #1 on Nested Named Entity Recognition on ACE 2004
Event argument extraction (EAE), which aims to discover the specific roles of event arguments, is an important information extraction task.
Recent pretrained language models have scaled from millions to billions of parameters.
Most existing NER methods rely on extensive labeled data for model training and therefore struggle in low-resource scenarios with limited training data.
Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.
1 code implementation • 15 Jun 2021 • Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei LI, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Medical Relation Extraction on CMeIE
Specifically, we leverage an encoder module to capture the context information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependency among triples.
Ranked #4 on Relation Extraction on DocRED
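The paper's exact architecture is not reproduced here, but the core idea can be sketched: build an image-style n × n entity-pair feature map from token features, then pass it through a downsample/upsample ("U-shaped") path with a skip connection so each cell mixes local and global context. A minimal numpy sketch under those assumptions (the pooling/upsampling choices below are illustrative, not the paper's):

```python
import numpy as np

def pair_feature_map(token_feats):
    """Build an image-style (n, n, 2d) map where cell (i, j) concatenates
    the features of tokens i and j."""
    n, d = token_feats.shape
    rows = np.repeat(token_feats[:, None, :], n, axis=1)   # (n, n, d)
    cols = np.repeat(token_feats[None, :, :], n, axis=0)   # (n, n, d)
    return np.concatenate([rows, cols], axis=-1)           # (n, n, 2d)

def u_shaped_pass(fmap):
    """Toy U-shape: 2x2 max-pool downsample, nearest-neighbor upsample,
    then a skip connection adding the original resolution back."""
    n = fmap.shape[0]
    assert n % 2 == 0, "toy version needs an even side length"
    pooled = fmap.reshape(n // 2, 2, n // 2, 2, -1).max(axis=(1, 3))
    upsampled = pooled.repeat(2, axis=0).repeat(2, axis=1)
    return fmap + upsampled  # skip connection merges local and global context

feats = np.random.default_rng(0).normal(size=(4, 8))
out = u_shaped_pass(pair_feature_map(feats))
print(out.shape)  # (4, 4, 16)
```

The pair map is what makes triple-level dependencies look like segmentation: each pixel is an entity pair, so a segmentation-style module naturally captures interdependency among triples.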
To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge bases.
Ranked #1 on Named Entity Recognition on BC2GM
Recently, prompt-tuning has achieved promising results for certain few-shot classification tasks.
Ranked #4 on Dialog Relation Extraction on DialogRE (F1 (v1) metric)
We introduce the Poincaré probe, a structural probe that projects these embeddings into a Poincaré subspace with explicitly defined hierarchies.
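The probe itself is learned, but its geometric ingredient is the Poincaré-ball distance, under which points near the boundary grow exponentially far apart, letting the ball embed tree-like hierarchies with low distortion. A small sketch of that distance (the example points are illustrative):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-12):
    """Geodesic distance between points in the Poincare ball (norms < 1)."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv) + eps))

root = np.zeros(2)        # hierarchy root sits at the origin
child = np.array([0.5, 0.0])
print(poincare_distance(root, child))  # ~1.0986, i.e. 2 * artanh(0.5)
```

Depth in a hierarchy then corresponds to distance from the origin, which is what the probe exploits.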
Recent neural relation extraction approaches, though achieving promising improvements on benchmark datasets, have been reported to be vulnerable to adversarial attacks.
Automatic Question Answering (QA) has been successfully applied in various domains such as search engines and chatbots.
With the TreeCRF, we obtain a uniform way to jointly model observed and latent nodes.
Ranked #6 on Nested Named Entity Recognition on ACE 2005
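A TreeCRF scores every span and sums over all trees with the inside algorithm; latent (unobserved) nodes are marginalized out for free, because every split point is summed over. A minimal log-space inside pass over toy span scores (a sketch, not the paper's parameterization):

```python
import numpy as np

def logsumexp(xs):
    xs = np.asarray(xs, dtype=float)
    m = xs.max()
    return m + np.log(np.exp(xs - m).sum())

def inside(score):
    """Log-partition over all binary trees for a length-n sentence.
    score[i][j] scores span [i, j); score has shape (n+1, n+1)."""
    n = score.shape[0] - 1
    chart = np.full((n + 1, n + 1), -np.inf)
    for i in range(n):                      # single-token spans
        chart[i][i + 1] = score[i][i + 1]
    for width in range(2, n + 1):           # wider spans, bottom-up
        for i in range(n - width + 1):
            j = i + width
            splits = [chart[i][k] + chart[k][j] for k in range(i + 1, j)]
            chart[i][j] = score[i][j] + logsumexp(splits)
    return chart[0][n]

# With all scores zero, the partition counts binary trees over 4 leaves:
print(np.exp(inside(np.zeros((5, 5)))))  # ~5.0, the Catalan number C_3
```

Training then maximizes the score of the observed (partially annotated) tree minus this log-partition.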
In the CTRP framework, a model takes a PICO-formatted clinical trial proposal with its background as input and predicts the result, i.e., how the Intervention group compares with the Comparison group in terms of the measured Outcome in the studied Population.
In this paper, we revisit the end-to-end triple extraction task for sequence generation.
Ranked #6 on Relation Extraction on NYT
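Generative triple extraction linearizes (head, relation, tail) triples into a flat target string for a seq2seq decoder and parses the generated string back into triples; the exact linearization varies by paper. A sketch of one round-trippable scheme (the delimiter tokens below are illustrative):

```python
def linearize(triples):
    """Serialize triples into a flat target sequence for a seq2seq decoder."""
    return " <sep> ".join(f"{h} <rel> {r} <tail> {t}" for h, r, t in triples)

def delinearize(text):
    """Parse a generated sequence back into triples, skipping malformed chunks."""
    triples = []
    for chunk in text.split(" <sep> "):
        try:
            head, rest = chunk.split(" <rel> ")
            rel, tail = rest.split(" <tail> ")
            triples.append((head, rel, tail))
        except ValueError:  # the model produced a malformed chunk
            continue
    return triples

triples = [("Marie Curie", "born_in", "Warsaw"), ("Warsaw", "capital_of", "Poland")]
assert delinearize(linearize(triples)) == triples
```

The tolerant parser matters in practice: a free-form decoder can emit chunks that do not parse, and those are simply dropped rather than crashing evaluation.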
Different functional areas of the human brain play different roles in brain activity, a fact that has received insufficient attention in brain-computer interface (BCI) research.
As a powerful classification approach, deep learning has recently received increasing attention from researchers and has been successfully applied to many domains.
First, we model cognitive events based on EEG data by characterizing the data using EEG optical flow, which is designed to preserve multimodal EEG information in a uniform representation.
Herein, we propose a novel approach to modeling cognitive events from EEG data by reducing it to a video classification problem, which is designed to preserve the multimodal information of EEG.
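One simple way to turn EEG into video-like input while preserving electrode topology is to render each timestep as a 2D frame using the electrodes' scalp coordinates; the optical-flow step is not reproduced here, and the coordinates and grid size below are illustrative:

```python
import numpy as np

def eeg_to_video(signals, coords, grid=8):
    """signals: (timesteps, channels); coords: (channels, 2) in [0, 1).
    Returns (timesteps, grid, grid) frames with each electrode's value
    placed at its nearest grid cell (empty cells stay zero)."""
    t, c = signals.shape
    frames = np.zeros((t, grid, grid))
    cells = np.minimum((coords * grid).astype(int), grid - 1)
    for ch, (x, y) in enumerate(cells):
        frames[:, y, x] = signals[:, ch]
    return frames

sig = np.random.default_rng(1).normal(size=(100, 3))   # 100 timesteps, 3 channels
xy = np.array([[0.1, 0.2], [0.5, 0.5], [0.9, 0.8]])    # scalp positions
video = eeg_to_video(sig, xy)
print(video.shape)  # (100, 8, 8)
```

Once the recording is a frame sequence, standard video classification machinery applies directly.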
Modeling sentence pairs plays a vital role in judging the relationship between two sentences, in tasks such as paraphrase identification, natural language inference, and answer sentence selection.
Ranked #9 on Paraphrase Identification on Quora Question Pairs (Accuracy metric)
We build the answer extraction model with state-of-the-art neural networks for single passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages.
The key idea is to search sentences similar to a query from Wikipedia articles and directly use the human-annotated entities in the similar sentences as candidate entities for the query.
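A minimal sketch of that retrieval idea, using token-overlap (Jaccard) similarity as a stand-in for a real retriever and a two-sentence toy corpus in place of Wikipedia:

```python
def jaccard(a, b):
    """Token-overlap similarity between two sentences."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def candidate_entities(query, corpus, top_k=1):
    """corpus: list of (sentence, annotated_entities). Return the entities
    attached to the top_k sentences most similar to the query."""
    ranked = sorted(corpus, key=lambda se: jaccard(query, se[0]), reverse=True)
    cands = []
    for _, ents in ranked[:top_k]:
        cands.extend(ents)
    return cands

corpus = [
    ("Barack Obama was born in Hawaii", ["Barack Obama", "Hawaii"]),
    ("The Eiffel Tower is in Paris", ["Eiffel Tower", "Paris"]),
]
print(candidate_entities("Where was Obama born", corpus))
# ['Barack Obama', 'Hawaii']
```

The human annotations ride along with the retrieved sentences, so no entity recognizer needs to run on the query itself.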
Automatic question generation aims to generate questions from a text passage such that the generated questions can be answered by certain sub-spans of the given passage.
Ranked #10 on Question Generation on SQuAD1.1