no code implementations • 31 Oct 2021 • Sarah E. Finch, James D. Finch, Daniil Huryn, William Hutsell, Xiaoyuan Huang, Han He, Jinho D. Choi
In the third and final stage, our bot selects a small subset of predicates and translates them into an English response.
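For illustration, a minimal sketch of how a selected subset of predicates might be rendered as an English response with simple templates; the predicate format, template strings, and selection heuristic are assumptions, not the bot's actual implementation.

```python
# Hypothetical sketch: select a few predicates and render them as English
# with hand-written templates. Predicate names and templates are assumptions.
TEMPLATES = {
    "likes": "{subj} likes {obj}.",
    "visited": "{subj} recently visited {obj}.",
}

def realize(predicates, max_predicates=2):
    """Pick a small subset of predicates and render them as English text."""
    selected = predicates[:max_predicates]          # simple selection heuristic
    sentences = [
        TEMPLATES[name].format(subj=subj, obj=obj)
        for name, subj, obj in selected
        if name in TEMPLATES
    ]
    return " ".join(sentences)

print(realize([("likes", "My sister", "hiking"), ("visited", "She", "Atlanta")]))
# -> "My sister likes hiking. She recently visited Atlanta."
```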
1 code implementation • EMNLP 2021 • Han He, Jinho D. Choi
Multi-task learning with transformer encoders (MTL) has emerged as a powerful technique for improving performance on closely related tasks in both accuracy and efficiency, yet it remains an open question whether it performs as well on tasks that are distinct in nature.
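A minimal sketch of the shared-encoder setup this refers to, with a random stand-in for the transformer encoder; the task names, label sizes, and dimensions are assumptions, not the paper's configuration.

```python
import numpy as np

# Sketch of multi-task learning with a shared encoder: one encoder produces
# token representations, and each task adds its own lightweight head on top.
# The encoder here is a random stand-in for a transformer (assumption).
rng = np.random.default_rng(0)
HIDDEN = 8

def shared_encoder(token_ids):
    """Stand-in for a transformer encoder: one vector per token."""
    return rng.standard_normal((len(token_ids), HIDDEN))

task_heads = {                                   # one projection per task
    "pos": rng.standard_normal((HIDDEN, 17)),    # e.g. 17 POS tags
    "ner": rng.standard_normal((HIDDEN, 9)),     # e.g. 9 BIO labels
}

def predict(token_ids, task):
    hidden = shared_encoder(token_ids)           # shared across all tasks
    logits = hidden @ task_heads[task]           # task-specific head
    return logits.argmax(axis=-1)

print(predict([5, 12, 7], "pos"))
```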
1 code implementation • 8 Sep 2021 • Han He, Liyan Xu, Jinho D. Choi
We introduce ELIT, the Emory Language and Information Toolkit, a comprehensive NLP framework that provides transformer-based end-to-end models for core tasks, with a special focus on memory efficiency while maintaining state-of-the-art accuracy and speed.
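As a rough illustration of what an end-to-end pipeline call in such a toolkit returns, here is a mock sketch; the function name and output fields below are hypothetical and do not reflect ELIT's actual API.

```python
# Hypothetical sketch only: the function and field names are illustrative
# assumptions, not ELIT's real interface. An end-to-end NLP pipeline
# typically returns several annotation layers from a single call.
def analyze(text):
    """Mock end-to-end pipeline: tokens, POS tags, and dependency arcs."""
    return {
        "tokens": text.split(),
        "pos": ["PRON", "VERB", "NOUN"],
        "dependencies": [(1, "nsubj"), (0, "root"), (1, "obj")],
    }

doc = analyze("I love coffee")
print(doc["tokens"], doc["pos"])
```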
1 code implementation • ACL (IWPT) 2021 • Han He, Jinho D. Choi
Coupled with biaffine decoders, transformers have been effectively adapted to text-to-graph transduction and achieved state-of-the-art performance on AMR parsing.
Ranked #13 on AMR Parsing on LDC2017T10
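For readers unfamiliar with biaffine decoders, here is a minimal NumPy sketch of biaffine pair scoring over token representations; the dimensions and the greedy head selection are illustrative assumptions, not the paper's parser.

```python
import numpy as np

# Sketch of a biaffine scorer: score every (head, dependent) pair of token
# representations with a bilinear term plus linear terms. Dimensions are
# arbitrary; this illustrates the mechanism, not the paper's model.
rng = np.random.default_rng(0)
n, d = 5, 16                               # 5 tokens, 16-dim representations
H = rng.standard_normal((n, d))            # head representations
D = rng.standard_normal((n, d))            # dependent representations
W = rng.standard_normal((d, d))            # bilinear weight
u, v = rng.standard_normal(d), rng.standard_normal(d)

scores = H @ W @ D.T + (H @ u)[:, None] + (D @ v)[None, :]
print(scores.shape)                        # (5, 5): one score per pair
heads = scores.argmax(axis=0)              # greedy head choice per dependent
print(heads)
```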
no code implementations • 6 Aug 2020 • Leevi Raivio, Han He, Johanna Virkki, Heikki Huttunen
The data is collected sequentially, such that we record both the stroke order and the resulting bitmap.
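A small sketch of what one such sequentially collected record might look like, holding both the ordered strokes and the rendered bitmap; the field names and the 32x32 size are assumptions, not the dataset's actual format.

```python
import numpy as np
from dataclasses import dataclass, field

# Illustrative record for sequentially collected handwriting data: the ordered
# strokes (lists of (x, y) points) plus the rasterized bitmap.
# Field names and bitmap size are assumptions.
@dataclass
class CharacterSample:
    label: str
    strokes: list            # e.g. [[(x0, y0), (x1, y1), ...], ...] in stroke order
    bitmap: np.ndarray = field(default_factory=lambda: np.zeros((32, 32), np.uint8))

sample = CharacterSample(label="A", strokes=[[(2, 30), (16, 2)], [(16, 2), (30, 30)]])
print(sample.label, len(sample.strokes), sample.bitmap.shape)
```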
no code implementations • WS 2020 • Han He, Jinho D. Choi
Our results show that models using the multilingual encoder outperform those using language-specific encoders for most languages.
no code implementations • WS 2020 • Tae Hwan Oh, Ji Yoon Han, Hyonsu Choe, Seokwon Park, Han He, Jinho D. Choi, Na-Rae Han, Jena D. Hwang, Hansaem Kim
In this paper, we first raise important issues regarding the Penn Korean Universal Treebank (PKT-UD) and address them by manually revising the entire corpus, with the aim of producing cleaner UD annotations that are more faithful to Korean grammar.
no code implementations • 2 Nov 2019 • Changmao Li, Han He, Yunze Hao, Caleb Ziems
This report assesses different machine learning approaches to 10-year survival prediction of breast cancer patients.
1 code implementation • 14 Aug 2019 • Han He, Jinho D. Choi
This paper presents new state-of-the-art models for three tasks (part-of-speech tagging, syntactic parsing, and semantic parsing) using the contextualized embedding framework known as BERT.
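One practical step when feeding BERT-style embeddings to token-level taggers and parsers is aligning subword pieces back to words; the sketch below shows a common mean-pooling strategy with random vectors standing in for contextualized embeddings (an assumption, not necessarily the paper's exact recipe).

```python
import numpy as np

# Sketch: pool subword embeddings back to word-level vectors by averaging,
# a common step when using BERT-style output for tagging and parsing.
# The random vectors stand in for contextualized embeddings (assumption).
rng = np.random.default_rng(0)
subwords = ["un", "##believ", "##able", "story"]   # wordpiece-style split
word_ids = [0, 0, 0, 1]                            # word index of each piece
emb = rng.standard_normal((len(subwords), 8))      # stand-in for BERT output

num_words = max(word_ids) + 1
word_emb = np.zeros((num_words, emb.shape[1]))
for w in range(num_words):
    word_emb[w] = emb[[i for i, j in enumerate(word_ids) if j == w]].mean(axis=0)

print(word_emb.shape)   # (2, 8): one vector per original word
```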
1 code implementation • 23 Dec 2017 • Han He, Lei Wu, Xiaokun Yang, Hua Yan, Zhimin Gao, Yi Feng, George Townsend
To ground the study and substantiate the efficiency of our neural architecture, we take Chinese Word Segmentation as a case study.
1 code implementation • 7 Dec 2017 • Han He, Lei Wu, Hua Yan, Zhimin Gao, Yi Feng, George Townsend
We present a simple yet elegant solution to train a single joint model on multi-criteria corpora for Chinese Word Segmentation (CWS).
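A minimal sketch of the general idea behind training one segmenter on corpora with different segmentation criteria: mark each sentence with its source corpus so a single model can learn corpus-specific conventions. The token format and corpus names below are illustrative assumptions rather than the paper's exact scheme.

```python
# Sketch of preparing multi-criteria CWS training data for a single joint
# model: wrap each character sequence with tokens identifying its corpus.
# Token format and corpus names are illustrative assumptions.
def add_criterion_tokens(chars, corpus):
    return [f"<{corpus}>"] + list(chars) + [f"</{corpus}>"]

pku_example = add_criterion_tokens("北京大学", "pku")
msr_example = add_criterion_tokens("北京大学", "msr")
print(pku_example)   # ['<pku>', '北', '京', '大', '学', '</pku>']
print(msr_example)   # ['<msr>', '北', '京', '大', '学', '</msr>']
```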