Search Results for author: Jin-Ge Yao

Found 18 papers, 4 papers with code

Leveraging Diverse Lexical Chains to Construct Essays for Chinese College Entrance Examination

no code implementations IJCNLP 2017 Liunian Li, Xiaojun Wan, Jin-Ge Yao, Siming Yan

In this work we study the challenging task of automatically constructing essays for the Chinese college entrance examination, where the topic is specified in advance.

Tasks: Sentence

Using Intermediate Representations to Solve Math Word Problems

no code implementations ACL 2018 Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu Zhou, Jian Yin

To solve math word problems, previous statistical approaches attempted to learn a direct mapping from a problem description to its corresponding equation system.

Tasks: Math, Math Word Problem Solving

Incorporating Consistency Verification into Neural Data-to-Document Generation

no code implementations 15 Aug 2018 Feng Nie, Hailin Chen, Jinpeng Wang, Jin-Ge Yao, Chin-Yew Lin, Rong Pan

Recent neural models for data-to-document generation have achieved remarkable progress in producing fluent and informative texts.

Tasks: Reinforcement Learning (RL), +1

Operations Guided Neural Networks for High Fidelity Data-To-Text Generation

1 code implementation 8 Sep 2018 Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, Chin-Yew Lin

Even though the generated texts are mostly fluent and informative, such models often produce descriptions that are not consistent with the input structured data.

Tasks: Data-to-Text Generation, Quantization, +1
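The idea of guiding generation with explicit operations can be illustrated with a small sketch (not the paper's implementation; the record fields, operation names, and template below are illustrative assumptions): simple operations such as subtraction and argmax are pre-executed over the structured input, so a generator can copy derived facts instead of implicitly (and unreliably) computing them.

```python
# Illustrative sketch: pre-execute explicit operations over structured
# input so a downstream generator only copies derived values.
# Field names and the template "generator" are hypothetical.

def derive_facts(record: dict) -> dict:
    """Augment a game record with results of explicit operations."""
    facts = dict(record)
    # Subtraction operation: point differential.
    facts["point_diff"] = abs(record["home_pts"] - record["away_pts"])
    # Argmax operation: which team scored more.
    facts["winner"] = (record["home"] if record["home_pts"] > record["away_pts"]
                       else record["away"])
    return facts

game = {"home": "Lakers", "home_pts": 110, "away": "Celtics", "away_pts": 102}
facts = derive_facts(game)
# A template stand-in for a neural generator, copying derived facts only:
sentence = f'{facts["winner"]} won by {facts["point_diff"]} points.'
```

Because every number in the output is produced by an explicit operation rather than decoded token-by-token, this kind of pipeline cannot state a point differential that contradicts the input scores.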

On the Abstractiveness of Neural Document Summarization

no code implementations EMNLP 2018 Fangfang Zhang, Jin-Ge Yao, Rui Yan

Many modern neural document summarization systems based on encoder-decoder networks are designed to produce abstractive summaries.

Tasks: Abstractive Text Summarization, Document Summarization

Operation-guided Neural Networks for High Fidelity Data-To-Text Generation

no code implementations EMNLP 2018 Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, Chin-Yew Lin

Even though the generated texts are mostly fluent and informative, such models often produce descriptions that are not consistent with the input structured data.

Tasks: Data-to-Text Generation, Quantization, +1

Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data

1 code implementation EMNLP 2018 Guanghui Qin, Jin-Ge Yao, Xuening Wang, Jinpeng Wang, Chin-Yew Lin

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms.

Tasks: Grounded Language Learning, Text Generation

Towards Improving Neural Named Entity Recognition with Gazetteers

1 code implementation ACL 2019 Tianyu Liu, Jin-Ge Yao, Chin-Yew Lin

Most recently proposed neural models for named entity recognition have been purely data-driven, with a strong emphasis on avoiding the effort of collecting external resources or designing hand-crafted features.

Ranked #14 on Named Entity Recognition (NER) on Ontonotes v5 (English) (using extra training data)

Tasks: Named Entity Recognition, +1
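One common way to incorporate a gazetteer into neural NER, sketched below under illustrative assumptions (the function name, feature encoding, and example entries are not from the paper), is to match token spans against an external name list and expose the matches as an extra binary feature a tagger could consume alongside word embeddings.

```python
# Hypothetical sketch: mark tokens covered by any gazetteer entry of
# up to `max_span` tokens, producing a per-token binary feature.

def gazetteer_features(tokens, gazetteer, max_span=3):
    feats = [0] * len(tokens)
    for i in range(len(tokens)):
        # Try all spans starting at i, up to max_span tokens long.
        for j in range(i + 1, min(i + max_span, len(tokens)) + 1):
            if " ".join(tokens[i:j]) in gazetteer:
                for k in range(i, j):
                    feats[k] = 1  # token k lies inside a gazetteer match
    return feats

gaz = {"New York", "Jin-Ge Yao"}
toks = ["Jin-Ge", "Yao", "visited", "New", "York", "yesterday"]
feats = gazetteer_features(toks, gaz)  # 1s over the two matched spans
```

A real system would typically embed this feature (or a BIO-style variant of it) and concatenate it with the token representation before the tagging layer.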

A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation

no code implementations ACL 2019 Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, Chin-Yew Lin

Recent neural language generation systems often "hallucinate" content (i.e., produce irrelevant or contradicted facts), especially when trained on loosely corresponding pairs of input structures and text.

Tasks: Hallucination, Text Generation

A Closer Look at Recent Results of Verb Selection for Data-to-Text NLG

no code implementations WS 2019 Guanyi Chen, Jin-Ge Yao

Automatic natural language generation systems need to use contextually appropriate verbs when describing different kinds of facts or events, which has triggered research interest in verb selection for data-to-text generation.

Tasks: Data-to-Text Generation

Issues with Entailment-based Zero-shot Text Classification

1 code implementation ACL 2021 Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, Tiejun Zhao

The general format of natural language inference (NLI) makes it tempting to use for zero-shot text classification: any target label is cast into a hypothesis sentence, and the model verifies whether or not the input entails it, aiming at generic classification applicable to any specified label space.

Tasks: Natural Language Inference, Sentence, +3
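The entailment-based recipe described above can be sketched in a few lines (the hypothesis template, the function names, and the toy word-overlap scorer are illustrative stand-ins; a real system would score each premise–hypothesis pair with a trained NLI model):

```python
import string

def build_hypothesis(label: str) -> str:
    # Cast a candidate label into a hypothesis sentence (template assumed).
    return f"This example is about {label}."

def zero_shot_classify(text, labels, entail_prob):
    # Score each (premise, hypothesis) pair independently and pick the
    # label whose hypothesis is most entailed by the input text.
    scores = {lab: entail_prob(text, build_hypothesis(lab)) for lab in labels}
    return max(scores, key=scores.get), scores

def toy_entail_prob(premise, hypothesis):
    # Toy stand-in for an NLI model: punctuation-stripped word overlap.
    norm = lambda s: set(
        s.lower().translate(str.maketrans("", "", string.punctuation)).split()
    )
    return len(norm(premise) & norm(hypothesis))

label, scores = zero_shot_classify(
    "The team played a great sports match.",
    ["sports", "politics", "cooking"],
    toy_entail_prob,
)
```

Scoring each label's hypothesis independently is exactly what makes the approach generic over arbitrary label spaces, and it is also a source of the calibration issues the paper examines: nothing constrains the per-label entailment scores to be comparable across labels.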
