Search Results for author: Chin-Yew Lin

Found 57 papers, 15 papers with code

On the Effectiveness of Sentence Encoding for Intent Detection Meta-Learning

1 code implementation NAACL 2022 Tingting Ma, Qianhui Wu, Zhiwei Yu, Tiejun Zhao, Chin-Yew Lin

Recent studies on few-shot intent detection have attempted to formulate the task as a meta-learning problem, where a meta-learning model is trained with a certain capability to quickly adapt to newly specified few-shot tasks with potentially unseen intent categories.

Intent Detection Meta-Learning +5

DesignProbe: A Graphic Design Benchmark for Multimodal Large Language Models

no code implementations 23 Apr 2024 Jieru Lin, Danqing Huang, Tiejun Zhao, Dechen Zhan, Chin-Yew Lin

This complexity makes comprehending graphic design challenging, as it requires the capability both to recognize the design elements and to understand the design.


LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

1 code implementation 19 Mar 2024 Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Dongmei Zhang

The challenge is that information entropy may be a suboptimal compression metric: (i) it only leverages unidirectional context and may fail to capture all essential information needed for prompt compression; (ii) it is not aligned with the prompt compression objective.
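The keep/drop framing behind prompt compression can be illustrated with a toy sketch. Note this is not the paper's method (LLMLingua-2 trains a token classifier; the snippet below uses a simple frequency-based surprisal score, which is a stand-in assumption), but it shows the basic idea of ranking tokens by informativeness and keeping the top fraction in original order:

```python
import math
from collections import Counter

def compress_prompt(prompt: str, keep_ratio: float = 0.5) -> str:
    """Toy token-level prompt compression: rank whitespace tokens by a
    frequency-based surprisal score (-log p) and keep the most
    informative ones, preserving their original order. Illustration
    only; not the classifier-based method of LLMLingua-2."""
    tokens = prompt.split()
    counts = Counter(t.lower() for t in tokens)
    total = len(tokens)
    # Rare tokens get high surprisal and are kept first.
    scores = [-math.log(counts[t.lower()] / total) for t in tokens]
    k = max(1, int(len(tokens) * keep_ratio))
    keep = set(sorted(range(len(tokens)),
                      key=lambda i: scores[i], reverse=True)[:k])
    return " ".join(tokens[i] for i in sorted(keep))
```

A unidirectional-entropy metric would score each token using only its left context; the paper argues such a metric is misaligned with the compression objective, which is what motivates learning token importance directly.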

GSM8K Language Modelling +3

Desigen: A Pipeline for Controllable Design Template Generation

no code implementations 14 Mar 2024 Haohan Weng, Danqing Huang, Yu Qiao, Zheng Hu, Chin-Yew Lin, Tong Zhang, C. L. Philip Chen

In this paper, we present Desigen, an automatic template creation pipeline which generates background images as well as harmonious layout elements over the background.

Spot the Error: Non-autoregressive Graphic Layout Generation with Wireframe Locator

1 code implementation 29 Jan 2024 Jieru Lin, Danqing Huang, Tiejun Zhao, Dechen Zhan, Chin-Yew Lin

Furthermore, based on our observation that pixel space is more sensitive to spatial patterns of graphic layouts (e.g., overlap, alignment), we propose a learning-based locator that detects erroneous tokens, taking as input the wireframe image rendered from the generated layout sequence.


All Data on the Table: Novel Dataset and Benchmark for Cross-Modality Scientific Information Extraction

no code implementations 14 Nov 2023 Yuhan Li, Jian Wu, Zhiwei Yu, Börje F. Karlsson, Wei Shen, Manabu Okumura, Chin-Yew Lin

To close this gap in data availability and enable cross-modality IE, while alleviating labeling costs, we propose a semi-supervised pipeline for annotating entities in text, as well as entities and relations in tables, in an iterative procedure.

LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression

1 code implementation 10 Oct 2023 Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu

Inspired by these findings, we propose LongLLMLingua for prompt compression towards improving LLMs' perception of the key information to simultaneously address the three challenges.

Code Completion Few-Shot Learning

LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

1 code implementation 9 Oct 2023 Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu

Large language models (LLMs) have been applied in various applications due to their astonishing capabilities.

GSM8K In-Context Learning

CoLaDa: A Collaborative Label Denoising Framework for Cross-lingual Named Entity Recognition

1 code implementation 24 May 2023 Tingting Ma, Qianhui Wu, Huiqiang Jiang, Börje F. Karlsson, Tiejun Zhao, Chin-Yew Lin

Cross-lingual named entity recognition (NER) aims to train an NER system that generalizes well to a target language by leveraging labeled data in a given source language.

Denoising Knowledge Distillation +3

Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text

1 code implementation 21 Nov 2022 Qianhui Wu, Huiqiang Jiang, Haonan Yin, Börje F. Karlsson, Chin-Yew Lin

Self-supervised representation learning has proved to be a valuable component for out-of-distribution (OoD) detection with only the texts of in-distribution (ID) examples.

Knowledge Distillation Language Modelling +3

Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers

no code implementations 20 Oct 2022 Wanjun Zhong, Tingting Ma, Jiahai Wang, Jian Yin, Tiejun Zhao, Chin-Yew Lin, Nan Duan

This paper presents ReasonFormer, a unified reasoning framework for mirroring the modular and compositional reasoning process of humans in complex decision-making.

Decision Making

Rows from Many Sources: Enriching row completions from Wikidata with a pre-trained Language Model

no code implementations 14 Apr 2022 Carina Negreanu, Alperen Karaoglu, Jack Williams, Shuang Chen, Daniel Fabian, Andrew Gordon, Chin-Yew Lin

The task divides into two steps: subject suggestion, the task of populating the main column; and gap filling, the task of populating the remaining columns.
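The two-step decomposition can be sketched with a toy example. The mini knowledge base, entity names, and column names below are hypothetical stand-ins; the actual system draws candidates from Wikidata and a pre-trained language model:

```python
# Hypothetical mini knowledge base standing in for Wikidata.
KB = {
    "Ada Lovelace": {"born": "1815", "field": "mathematics"},
    "Alan Turing": {"born": "1912", "field": "computer science"},
    "Grace Hopper": {"born": "1906", "field": "computer science"},
}

def suggest_subjects(existing, k=2):
    """Step 1 (subject suggestion): propose new main-column entities
    that are not already present in the table."""
    return [s for s in KB if s not in existing][:k]

def fill_gaps(subject, columns):
    """Step 2 (gap filling): populate the remaining columns for a
    suggested subject, leaving unknown cells empty."""
    facts = KB.get(subject, {})
    return {c: facts.get(c, "") for c in columns}
```

Splitting the task this way lets each step be evaluated and sourced independently, e.g., subjects from a knowledge base and gap values from a language model.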

Language Modelling Text Generation

Issues with Entailment-based Zero-shot Text Classification

1 code implementation ACL 2021 Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, Tiejun Zhao

The general format of natural language inference (NLI) makes it tempting to use for zero-shot text classification: cast each target label into a hypothesis sentence and verify whether it is entailed by the input, aiming at generic classification applicable to any specified label space.
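The entailment-based recipe can be sketched as follows. The hypothesis template and the overlap-based scorer are toy assumptions made so the example runs without a model; in practice any pretrained NLI classifier would be plugged in as `entail_score`:

```python
def entail_score(premise: str, hypothesis: str) -> float:
    """Stand-in for a real NLI model's entailment probability.
    This toy scorer just measures word overlap so the example is
    self-contained; it is not a substitute for an actual NLI model."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / max(1, len(h))

def zero_shot_classify(text: str, labels: list[str]) -> str:
    """Cast each candidate label into a hypothesis sentence and pick
    the label whose hypothesis is most strongly 'entailed'."""
    hypotheses = {lab: f"this text is about {lab}" for lab in labels}
    return max(labels, key=lambda lab: entail_score(text, hypotheses[lab]))
```

The paper's point is that this setup has pitfalls: the choice of hypothesis template and the mismatch between NLI training data and the target label space can both distort the resulting classifier.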

Natural Language Inference Sentence +3

CANVASEMB: Learning Layout Representation with Large-scale Pre-training for Graphic Design

no code implementations 1 Jan 2021 Yuxi Xie, Danqing Huang, Jinpeng Wang, Chin-Yew Lin

Layout representation, which models visual elements in a canvas and their inter-relations, plays a crucial role in graphic design intelligence.

Image Captioning Multi-Task Learning +1

Learning Semantic Correspondences from Noisy Data-text Pairs by Local-to-Global Alignments

no code implementations COLING 2020 Feng Nie, Jinpeng Wang, Chin-Yew Lin

Large-scale datasets recently proposed for generation contain loosely corresponding data-text pairs, where some spans in the text cannot be aligned to the incomplete paired input.

Data-to-Text Generation

Improving Entity Linking by Modeling Latent Entity Type Information

no code implementations 6 Jan 2020 Shuang Chen, Jinpeng Wang, Feng Jiang, Chin-Yew Lin

Existing state-of-the-art neural entity linking models employ an attention-based bag-of-words context model and pre-trained entity embeddings bootstrapped from word embeddings to assess topic-level context compatibility.

Ranked #2 on Entity Disambiguation on AIDA-CoNLL (Micro-F1 metric)

Entity Disambiguation Entity Embeddings +3

Enhanced Meta-Learning for Cross-lingual Named Entity Recognition with Minimal Resources

1 code implementation 14 Nov 2019 Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, Börje F. Karlsson, Biqing Huang, Chin-Yew Lin

For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER).

Cross-Lingual NER Meta-Learning +4

An Encoder with non-Sequential Dependency for Neural Data-to-Text Generation

no code implementations WS 2019 Feng Nie, Jinpeng Wang, Rong Pan, Chin-Yew Lin

Data-to-text generation aims to generate descriptions given structured input data (i.e., a table with multiple records).

Data-to-Text Generation

Measuring Numerical Common Sense: Is A Word Embedding Approach Effective?

no code implementations 25 Sep 2019 Hiroaki Yamane, Chin-Yew Lin, Tatsuya Harada

To this end, we first used a crowdsourcing service to obtain sufficient data for a subjective agreement on numerical common sense.

Common Sense Reasoning regression +1

A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation

no code implementations ACL 2019 Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, Chin-Yew Lin

Recent neural language generation systems often hallucinate content (i.e., produce irrelevant or contradicted facts), especially when trained on loosely corresponding pairs of the input structure and text.

Hallucination Text Generation

Towards Improving Neural Named Entity Recognition with Gazetteers

1 code implementation ACL 2019 Tianyu Liu, Jin-Ge Yao, Chin-Yew Lin

Most of the recently proposed neural models for named entity recognition have been purely data-driven, with a strong emphasis on getting rid of the efforts for collecting external resources or designing hand-crafted features.

Ranked #14 on Named Entity Recognition (NER) on Ontonotes v5 (English) (using extra training data)

named-entity-recognition Named Entity Recognition +1

Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data

1 code implementation EMNLP 2018 Guanghui Qin, Jin-Ge Yao, Xuening Wang, Jinpeng Wang, Chin-Yew Lin

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms.

Grounded language learning Text Generation

Operation-guided Neural Networks for High Fidelity Data-To-Text Generation

no code implementations EMNLP 2018 Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, Chin-Yew Lin

Even though the generated texts are mostly fluent and informative, existing models often produce descriptions that are not consistent with the input structured data.

Data-to-Text Generation Decoder +2

Aggregated Semantic Matching for Short Text Entity Linking

no code implementations CONLL 2018 Feng Nie, Shuyan Zhou, Jing Liu, Jinpeng Wang, Chin-Yew Lin, Rong Pan

The task of entity linking aims to identify concepts mentioned in text fragments and link them to a reference knowledge base.

Card Games Entity Linking +2

Operations Guided Neural Networks for High Fidelity Data-To-Text Generation

1 code implementation 8 Sep 2018 Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, Chin-Yew Lin

Even though the generated texts are mostly fluent and informative, existing models often produce descriptions that are not consistent with the input structured data.

Data-to-Text Generation Decoder +2

Incorporating Consistency Verification into Neural Data-to-Document Generation

no code implementations 15 Aug 2018 Feng Nie, Hailin Chen, Jinpeng Wang, Jin-Ge Yao, Chin-Yew Lin, Rong Pan

Recent neural models for data-to-document generation have achieved remarkable progress in producing fluent and informative texts.

reinforcement-learning Reinforcement Learning (RL) +1

Neural Math Word Problem Solver with Reinforcement Learning

no code implementations COLING 2018 Danqing Huang, Jing Liu, Chin-Yew Lin, Jian Yin

Experimental results show that (1) The copy and alignment mechanism is effective to address the two issues; (2) Reinforcement learning leads to better performance than maximum likelihood on this task; (3) Our neural model is complementary to the feature-based model and their combination significantly outperforms the state-of-the-art results.

Feature Engineering Math +3

Using Intermediate Representations to Solve Math Word Problems

no code implementations ACL 2018 Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu Zhou, Jian Yin

To solve math word problems, previous statistical approaches attempt at learning a direct mapping from a problem description to its corresponding equation system.

Math Math Word Problem Solving

A Statistical Framework for Product Description Generation

no code implementations IJCNLP 2017 Jinpeng Wang, Yutai Hou, Jing Liu, Yunbo Cao, Chin-Yew Lin

We present in this paper a statistical framework that generates accurate and fluent product descriptions from product attributes.

Attribute Data-to-Text Generation

Learning Fine-Grained Expressions to Solve Math Word Problems

no code implementations EMNLP 2017 Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin

This method learns the mappings between math concept phrases in math word problems and their math expressions from training data.

Math Math Word Problem Solving

List-only Entity Linking

no code implementations ACL 2017 Ying Lin, Chin-Yew Lin, Heng Ji

Traditional Entity Linking (EL) technologies rely on rich structures and properties in the target knowledge base (KB).

Entity Linking
