Search Results for author: Han He

Found 14 papers, 8 papers with code

Effective Neural Solution for Multi-Criteria Word Segmentation

1 code implementation • 7 Dec 2017 • Han He, Lei Wu, Hua Yan, Zhimin Gao, Yi Feng, George Townsend

We present a simple yet elegant solution to train a single joint model on multi-criteria corpora for Chinese Word Segmentation (CWS).

Chinese Word Segmentation • Sentence
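A hedged sketch of the general idea behind the paper above: multi-criteria corpora can be pooled for a single joint model by wrapping each sentence with artificial tokens that identify its segmentation criterion, which my understanding is the trick this line of work uses. This is not the authors' code; the token names and corpus IDs are illustrative.

```python
# Hedged sketch, not the authors' implementation: pool corpora with
# different segmentation criteria by adding corpus-identifying pseudo
# tokens, so one joint model can condition on the criterion.
def wrap_with_criterion(tokens, corpus_id):
    """Wrap a tokenized sentence with artificial corpus tags (illustrative names)."""
    return [f"<{corpus_id}>"] + tokens + [f"</{corpus_id}>"]

sentence = ["商品", "和", "服务"]
wrapped = wrap_with_criterion(sentence, "pku")
# wrapped == ['<pku>', '商品', '和', '服务', '</pku>']
```

At training time, each corpus contributes sentences tagged with its own ID; at test time, the desired criterion's tags select the matching segmentation behavior.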

Dual Long Short-Term Memory Networks for Sub-Character Representation Learning

1 code implementation • 23 Dec 2017 • Han He, Lei Wu, Xiaokun Yang, Hua Yan, Zhimin Gao, Yi Feng, George Townsend

To build a concrete study and substantiate the efficiency of our neural architecture, we take Chinese Word Segmentation as a research case example.

Chinese Word Segmentation • Representation Learning +1

Establishing Strong Baselines for the New Decade: Sequence Tagging, Syntactic and Semantic Parsing with BERT

1 code implementation • 14 Aug 2019 • Han He, Jinho D. Choi

This paper presents new state-of-the-art models for three tasks, part-of-speech tagging, syntactic parsing, and semantic parsing, using the cutting-edge contextualized embedding framework known as BERT.

Part-Of-Speech Tagging • Semantic Parsing
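As a rough illustration of the recipe the abstract describes (contextualized embeddings feeding task-specific classifiers), here is a hedged numpy sketch of a token tagger. The random matrix stands in for BERT output; all dimensions, names, and weights are made up, and a real model would be trained rather than random.

```python
import numpy as np

# Hedged sketch (not the paper's code): a token-level tagger puts a
# linear classification layer over contextualized embeddings.
rng = np.random.default_rng(0)
hidden, n_tags, seq_len = 8, 4, 5           # illustrative sizes

embeddings = rng.normal(size=(seq_len, hidden))  # stand-in for BERT output
W = rng.normal(size=(hidden, n_tags))            # classifier weights
b = np.zeros(n_tags)                             # classifier bias

logits = embeddings @ W + b                  # (seq_len, n_tags)
pred_tags = logits.argmax(axis=-1)           # one tag index per token
```

The same pattern extends to the parsing tasks in the paper by swapping the per-token classifier for a structured decoder.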

Ten-year Survival Prediction for Breast Cancer Patients

no code implementations • 2 Nov 2019 • Changmao Li, Han He, Yunze Hao, Caleb Ziems

This report assesses different machine learning approaches to 10-year survival prediction of breast cancer patients.

BIG-bench Machine Learning • Survival Prediction

Analysis of the Penn Korean Universal Dependency Treebank (PKT-UD): Manual Revision to Build Robust Parsing Model in Korean

no code implementations WS 2020 Tae Hwan Oh, Ji Yoon Han, Hyonsu Choe, Seokwon Park, Han He, Jinho D. Choi, Na-Rae Han, Jena D. Hwang, Hansaem Kim

In this paper, we first report on important issues regarding the Penn Korean Universal Dependency Treebank (PKT-UD) and address these issues by revising the entire corpus manually, with the aim of producing cleaner UD annotations that are more faithful to Korean grammar.

Adaptation of Multilingual Transformer Encoder for Robust Enhanced Universal Dependency Parsing

no code implementations WS 2020 Han He, Jinho D. Choi

Our results show that models using the multilingual encoder outperform those using the language-specific encoders for most languages.

Dependency Parsing

Handwritten Character Recognition from Wearable Passive RFID

no code implementations • 6 Aug 2020 • Leevi Raivio, Han He, Johanna Virkki, Heikki Huttunen

The data is collected sequentially, such that we record both the stroke order and the resulting bitmap.

Levi Graph AMR Parser using Heterogeneous Attention

1 code implementation ACL (IWPT) 2021 Han He, Jinho D. Choi

Coupled with biaffine decoders, transformers have been effectively adapted to text-to-graph transduction and achieved state-of-the-art performance on AMR parsing.

AMR Parsing
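The biaffine decoders mentioned above score every (head, dependent) token pair. A hedged numpy sketch of a plain biaffine arc scorer follows; dimensions and variable names are illustrative, and the paper's heterogeneous-attention Levi-graph variant differs from this vanilla form.

```python
import numpy as np

# Hedged sketch of a standard biaffine arc scorer (illustrative, not the
# paper's code): score(dep i, head j) = h_dep_i^T U h_head_j.
rng = np.random.default_rng(1)
n, d = 6, 16                        # tokens, hidden size (made up)
H_dep = rng.normal(size=(n, d))     # dependent representations
H_head = rng.normal(size=(n, d))    # head representations
U = rng.normal(size=(d, d))         # biaffine weight matrix

scores = H_dep @ U @ H_head.T       # (n, n): arc score for every pair
pred_heads = scores.argmax(axis=1)  # greedy head choice per token
```

Trained models additionally use a second biaffine term for labels and decode with a tree or graph constraint rather than a per-token argmax.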

ELIT: Emory Language and Information Toolkit

1 code implementation • 8 Sep 2021 • Han He, Liyan Xu, Jinho D. Choi

We introduce ELIT, the Emory Language and Information Toolkit, which is a comprehensive NLP framework providing transformer-based end-to-end models for core tasks with a special focus on memory efficiency while maintaining state-of-the-art accuracy and speed.

AMR Parsing • Constituency Parsing +9

The Stem Cell Hypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders

1 code implementation EMNLP 2021 Han He, Jinho D. Choi

Multi-task learning with transformer encoders (MTL) has emerged as a powerful technique to improve both the accuracy and the efficiency of closely related tasks, yet the question remains whether it performs as well on tasks that are distinct in nature.

Multi-Task Learning • NER +1
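The hard-parameter-sharing setup the abstract studies can be sketched as one shared encoder feeding several task-specific heads. A hedged numpy toy follows; the task names, sizes, and `encode` stand-in are all illustrative, not the paper's architecture.

```python
import numpy as np

# Hedged sketch of hard-parameter-sharing MTL: a single shared encoder
# is computed once, then several task heads read from it (illustrative).
rng = np.random.default_rng(2)
hidden = 8
task_heads = {task: rng.normal(size=(hidden, n_out))
              for task, n_out in [("pos", 17), ("ner", 9)]}

def encode(x):
    # Stand-in for a shared transformer encoder.
    return np.tanh(x)

tokens = rng.normal(size=(5, hidden))  # 5 token embeddings (made up)
shared = encode(tokens)                # shared representation, computed once
outputs = {task: shared @ W for task, W in task_heads.items()}
```

The paper's question is whether this sharing still helps when the heads' tasks are distinct rather than closely related.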

DFEE: Interactive DataFlow Execution and Evaluation Kit

1 code implementation • 4 Dec 2022 • Han He, Song Feng, Daniele Bonadiman, Yi Zhang, Saab Mansour

DataFlow is emerging as a new paradigm for building task-oriented chatbots, owing to its expressive semantic representations of dialogue tasks.

Benchmarking • Scheduling

Widely Interpretable Semantic Representation: Frameless Meaning Representation for Broader Applicability

no code implementations • 12 Sep 2023 • Lydia Feng, Gregor Williamson, Han He, Jinho D. Choi

Despite its strengths, AMR is not easily applied to languages or domains without predefined semantic frames, and its use of numbered arguments results in semantic role labels, which are not directly interpretable and are semantically overloaded for parsers.
