In this paper, we propose two models (i.e., a knowledge span prediction model and a response generation model) for Subtask 1 and Subtask 2, respectively.
In this paper, we propose a pipelined model (i.e., resolution of anaphoric identity followed by resolution of bridging references) for Subtask 1 and Subtask 2.
Sentence-level relation extraction aims to classify the relation between two entities within a single sentence.
Ranked #1 on Relation Extraction on Re-TACRED
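As an illustration of the task input, the sketch below wraps the two entity spans with marker tokens before encoding; the `[SUBJ]`/`[OBJ]` marker convention is a common formatting choice assumed here for illustration, not a detail taken from the paper.

```python
# Illustrative sketch (not the paper's actual pipeline): prepare a
# sentence for relation classification by marking the entity spans.

def mark_entities(tokens, subj_span, obj_span):
    """Insert [SUBJ]/[OBJ] marker tokens around the given (start, end)
    spans; span indices are inclusive."""
    out = []
    for i, tok in enumerate(tokens):
        if i == subj_span[0]:
            out.append("[SUBJ]")
        if i == obj_span[0]:
            out.append("[OBJ]")
        out.append(tok)
        if i == subj_span[1]:
            out.append("[/SUBJ]")
        if i == obj_span[1]:
            out.append("[/OBJ]")
    return out

marked = mark_entities(["Bill", "Gates", "founded", "Microsoft", "."],
                       subj_span=(0, 1), obj_span=(3, 3))
print(" ".join(marked))
# → [SUBJ] Bill Gates [/SUBJ] founded [OBJ] Microsoft [/OBJ] .
```

A classifier then predicts a relation label (e.g., `founded_by`) for the marked sentence.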
In this paper, we propose the Deep Contextualized Relation-Aware Network (DCRAN), which models interactions among subtasks with deep contextual information via two modules (i.e., Aspect and Opinion Propagation and Explicit Self-Supervised Strategies).
This work investigates how tokenization methods affect the training results of machine translation models.
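A toy comparison (the corpus below is made up for illustration) of why tokenization choice matters: granularity trades vocabulary size against sequence length, both of which shape MT model training.

```python
# Hypothetical three-sentence corpus; compare word-level and
# character-level tokenization.
corpus = ["the cat sat", "the cats sat", "a cat sits"]

# Word-level: every distinct whitespace token is a vocabulary entry.
word_vocab = {tok for line in corpus for tok in line.split()}

# Character-level: small symbol set, but much longer sequences.
char_vocab = {ch for line in corpus for ch in line if ch != " "}

word_len = len(corpus[0].split())           # tokens in the first sentence
char_len = len(corpus[0].replace(" ", ""))  # characters in the first sentence

print(len(word_vocab), len(char_vocab))  # vocabulary sizes
print(word_len, char_len)                # sequence lengths
```

Subword schemes such as BPE sit between these extremes, which is why tokenization is a tunable design choice rather than a fixed preprocessing step.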
Intrinsic human evaluation of natural language generation models is conducted because the quality of generated sentences cannot be fully captured by extrinsic evaluation alone.
Although some sentences in a document provide important evidence for sentiment analysis while others do not, previous work has treated the document as a bag of sentences.
Ranked #1 on Document Classification on IMDb-M
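A minimal sketch of the idea, with made-up scores and features: instead of averaging over a flat bag of sentences, weight each sentence by an attention-style evidence score so that evidential sentences dominate the document representation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Toy per-sentence evidence scores and scalar sentiment features
# for a 3-sentence document (illustrative values only).
scores = [2.0, 0.1, -1.0]
feats = [0.9, 0.2, -0.5]

weights = softmax(scores)
doc_repr = sum(w * f for w, f in zip(weights, feats))  # attention-weighted
bag_repr = sum(feats) / len(feats)                     # flat bag average
```

Here the first sentence carries the strongest evidence, so the weighted representation leans toward its feature, unlike the flat average.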
The proposed model finds n-to-1 subject-object relations using a forward object decoder.
Ranked #1 on Relation Extraction on ACE 2005 (Relation classification F1 metric)
Ranked #1 on Relation Extraction on ACE 2005 (Cross Sentence metric)
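To illustrate what an n-to-1 subject-object relation looks like (the triples below are invented, and this grouping is a simplification of what an object decoder handles, not the model itself): several subjects share one (relation, object) pair, so the shared object can be decoded once for the whole group.

```python
from collections import defaultdict

# Made-up extracted triples (subject, relation, object); three subjects
# point to the same object, forming an n-to-1 pattern.
triples = [
    ("Alice", "works_for", "Acme"),
    ("Bob", "works_for", "Acme"),
    ("Carol", "works_for", "Acme"),
    ("Acme", "based_in", "Boston"),
]

# Group subjects by their shared (relation, object) pair.
groups = defaultdict(list)
for subj, rel, obj in triples:
    groups[(rel, obj)].append(subj)

print(dict(groups))
```

The `works_for`/`Acme` group collects all three subjects, while the 1-to-1 `based_in` triple forms its own group.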
This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums.