no code implementations • ACL (dialdoc) 2021 • Boeun Kim, Dohaeng Lee, Sihyung Kim, Yejin Lee, Jin-Xia Huang, Oh-Woog Kwon, Harksoo Kim
In this paper, we propose two models (i.e., a knowledge span prediction model and a response generation model) for Subtask 1 and Subtask 2.
no code implementations • ACL (CODI, CRAC) 2021 • Hongjin Kim, Damrin Kim, Harksoo Kim
In this paper, we propose a pipelined model (i.e., resolution of anaphoric identity followed by resolution of bridging references) for Subtask 1 and Subtask 2.
no code implementations • 20 Jul 2021 • Seongsik Park, Harksoo Kim
Sentence-level relation extraction mainly aims to classify the relation between two entities in a sentence.
Ranked #1 on Relation Extraction on Re-TACRED
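The task in the entry above, classifying the relation between two marked entities in a single sentence, can be illustrated with a minimal sketch. Everything here is a hypothetical illustration of the task format, not the authors' model: the entity markers, the `mark_entities` helper, and the keyword rule standing in for a trained classifier are all invented for this example.

```python
# Toy illustration of sentence-level relation extraction as a
# classification task (hypothetical example, not the paper's model).

def mark_entities(sentence: str, subj: str, obj: str) -> str:
    """Wrap the subject and object spans with entity markers."""
    return (sentence.replace(subj, f"[E1] {subj} [/E1]")
                    .replace(obj, f"[E2] {obj} [/E2]"))

def toy_classifier(marked_sentence: str) -> str:
    """Stand-in for a trained relation classifier: a keyword rule."""
    if "founded" in marked_sentence:
        return "org:founded_by"
    return "no_relation"

sentence = "Steve Jobs founded Apple in 1976."
marked = mark_entities(sentence, "Steve Jobs", "Apple")
print(marked)                   # [E1] Steve Jobs [/E1] founded [E2] Apple [/E2] in 1976.
print(toy_classifier(marked))   # org:founded_by
```

A real system would replace the keyword rule with a neural encoder over the marked sentence; the marker-based input format shown here is a common convention in the relation-extraction literature.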
no code implementations • ACL 2021 • Shinhyeok Oh, Dongyub Lee, Taesun Whang, IlNam Park, Gaeun Seo, EungGyun Kim, Harksoo Kim
In this paper, we propose Deep Contextualized Relation-Aware Network (DCRAN), which allows interactive relations among subtasks with deep contextual information based on two modules (i.e., Aspect and Opinion Propagation and Explicit Self-Supervised Strategies).
1 code implementation • 29 May 2021 • Dojun Park, Youngjin Jang, Harksoo Kim
This work investigates how tokenization methods affect the training results of machine translation models.
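The choice studied above, how a sentence is split into tokens, can be made concrete with a toy comparison of tokenization granularities. This is a hypothetical illustration using only whitespace and character splitting; the tokenizers actually compared in the paper (e.g., subword methods) may differ.

```python
# Toy comparison of tokenization granularities (hypothetical
# illustration; the paper's actual tokenizers may differ).

sentence = "Tokenization affects machine translation quality."

word_tokens = sentence.split()                  # word-level tokens
char_tokens = list(sentence.replace(" ", ""))   # character-level tokens

print(word_tokens)        # ['Tokenization', 'affects', 'machine', 'translation', 'quality.']
print(len(word_tokens))   # 5
print(len(char_tokens))   # 45
```

Word-level tokenization yields short sequences but a large vocabulary with many rare words, while character-level tokenization yields a tiny vocabulary but long sequences; subword methods sit between these extremes, which is why the choice matters for machine translation training.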
no code implementations • 29 May 2021 • Dojun Park, Youngjin Jang, Harksoo Kim
We conduct intrinsic human evaluation of natural language generation models to address the limitation that extrinsic evaluation alone cannot fully capture the quality of generated sentences.
no code implementations • 9 Mar 2021 • Gihyeon Choi, Shinhyeok Oh, Harksoo Kim
Although some sentences in a document provide important evidence for sentiment analysis and others do not, previous studies have treated the document as a bag of sentences.
Ranked #1 on Document Classification on IMDb-M
no code implementations • 5 Mar 2021 • Seongsik Park, Harksoo Kim
The proposed model finds n-to-1 subject-object relations using a forward object decoder.
Ranked #1 on Relation Extraction on ACE 2005 (Relation classification F1 metric)
no code implementations • WS 2019 • Seong Sik Park, Harksoo Kim
The proposed model finds n-to-1 subject-object relations by using a forward decoder called an object decoder.
Ranked #1 on Relation Extraction on ACE 2005 (Cross Sentence metric)
no code implementations • SEMEVAL 2019 • Cheoneum Park, Juae Kim, Hyeon-gu Lee, Reinald Kim Amplayo, Harksoo Kim, Jungyun Seo, Chang-Ki Lee
This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums.