Search Results for author: Ting-Rui Chiang

Found 14 papers, 3 papers with code

Understanding In-Context Learning with a Pelican Soup Framework

no code implementations16 Feb 2024 Ting-Rui Chiang, Dani Yogatama

In this framework, we introduce (1) the notion of a common sense knowledge base, (2) a general formalism for natural language classification tasks, and (3) the notion of meaning association.

Common Sense Reasoning · In-Context Learning +1

On Retrieval Augmentation and the Limitations of Language Model Training

no code implementations16 Nov 2023 Ting-Rui Chiang, Xinyan Velocity Yu, Joshua Robinson, Ollie Liu, Isabelle Lee, Dani Yogatama

Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive.

Language Modelling · Memorization +1
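The snippet above refers to augmenting a language model with $k$NN retrieval over its own training data. As an illustration only (not this paper's implementation), here is a minimal kNN-LM-style sketch in which a datastore of (context vector, next token) pairs reshapes the base LM's next-token distribution; the vectors, datastore, and hyperparameters `k`, `temp`, and `lam` are all hypothetical toy values.

```python
import numpy as np

def knn_lm_probs(lm_probs, query, keys, values, vocab_size,
                 k=2, temp=1.0, lam=0.25):
    """Interpolate the base LM distribution with a kNN distribution built
    from stored (context vector, next token) pairs from training data."""
    neg_dists = -np.linalg.norm(keys - query, axis=1)   # closer = larger
    nearest = np.argsort(neg_dists)[-k:]                # k nearest neighbors
    weights = np.exp(neg_dists[nearest] / temp)
    weights /= weights.sum()
    knn_probs = np.zeros(vocab_size)
    for w, idx in zip(weights, nearest):
        knn_probs[values[idx]] += w                     # mass on stored tokens
    return lam * knn_probs + (1.0 - lam) * lm_probs     # mixture distribution

# Toy datastore: three stored contexts whose observed next tokens are 2, 2, 0.
lm = np.array([0.1, 0.2, 0.3, 0.4])
keys = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
values = np.array([2, 2, 0])
p = knn_lm_probs(lm, np.array([0.9, 1.1]), keys, values, vocab_size=4)
```

Because both retrieved neighbors store token 2, the interpolated distribution shifts probability mass toward it, which is the mechanism by which retrieval can lower perplexity on tokens that recur near similar contexts.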

The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining

1 code implementation25 Oct 2023 Ting-Rui Chiang, Dani Yogatama

Via a synthetic dataset, our analysis suggests that the distributional property indeed leads to the better sample efficiency of pretrained masked language models, but it does not fully explain their generalization capability.

Language Modelling · Masked Language Modeling +2
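The pretraining objective this paper analyzes builds on the standard BERT-style masking scheme. As a hedged sketch of that scheme only (the toy vocabulary, 15% rate, and 80/10/10 split are the conventional defaults, not necessarily this paper's setup):

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary (assumed)

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style corruption: each position is selected with prob. 15%;
    a selected token becomes [MASK] 80% of the time, a random vocabulary
    token 10%, and stays unchanged 10%. The original token at every
    selected position is the prediction target."""
    rng = random.Random(seed)
    inputs, targets = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok                   # model must recover this token
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = MASK
            elif roll < 0.9:
                inputs[i] = rng.choice(VOCAB)  # corrupt with a random token
            # else: keep the original token as input
    return inputs, targets

inputs, targets = mask_tokens("the cat sat on the mat".split() * 10)
```

The model is trained to predict the target tokens from the corrupted input, so the signal it receives is exactly the co-occurrence structure that the distributional-hypothesis argument in the paper examines.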

Breaking Down Multilingual Machine Translation

no code implementations Findings (ACL) 2022 Ting-Rui Chiang, Yi-Pei Chen, Yi-Ting Yeh, Graham Neubig

While multilingual training is now an essential ingredient in machine translation (MT) systems, recent work has demonstrated that it has different effects in different multilingual settings, such as many-to-one, one-to-many, and many-to-many learning.

Machine Translation · Translation

Are you doing what I say? On modalities alignment in ALFRED

no code implementations12 Oct 2021 Ting-Rui Chiang, Yi-Ting Yeh, Ta-Chung Chi, Yau-Shian Wang

ALFRED is a recently proposed benchmark that requires a model to complete tasks in simulated house environments specified by instructions in natural language.

On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias

no code implementations11 Oct 2021 Ting-Rui Chiang

Despite the success of pretrained masked language models (MLM), why MLM pretraining is useful is still a question that has not been fully answered.

Hate Speech Detection · Language Modelling

Relating Neural Text Degeneration to Exposure Bias

no code implementations EMNLP (BlackboxNLP) 2021 Ting-Rui Chiang, Yun-Nung Chen

This work focuses on relating two mysteries in neural text generation: exposure bias and text degeneration.

Language Modelling · Text Generation
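Text degeneration commonly surfaces as repetition loops under maximization-based decoding. As a toy illustration only (a bigram model on an invented corpus, not this paper's experimental setup), greedy decoding here falls into a cycle:

```python
from collections import Counter, defaultdict

def greedy_generate(corpus, start, steps=8):
    """Fit bigram counts on a toy corpus, then decode greedily by always
    picking the most frequent successor. Maximization-based decoding can
    fall into repetition loops, one surface form of text degeneration."""
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1
    out = [start]
    for _ in range(steps):
        successors = bigrams[out[-1]].most_common(1)
        if not successors:        # dead end: no observed continuation
            break
        out.append(successors[0][0])
    return out

corpus = "the cat saw the cat saw the dog".split()
sample = greedy_generate(corpus, "the")
# Greedy decoding cycles through "the cat saw" indefinitely.
```

Exposure bias is the related train/test mismatch: the model is trained only on gold prefixes (teacher forcing) but must condition on its own, possibly degenerate, outputs at generation time.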

Why Can You Lay Off Heads? Investigating How BERT Heads Transfer

no code implementations14 Jun 2021 Ting-Rui Chiang, Yun-Nung Chen

Hence, the acceptable deduction of performance on the pre-trained task when distilling a model can be derived from the results, and we further compare the behavior of the pruned model before and after fine-tuning.

Transfer Learning

An Empirical Study of Content Understanding in Conversational Question Answering

1 code implementation24 Sep 2019 Ting-Rui Chiang, Hao-Tong Ye, Yun-Nung Chen

However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark dataset reflect models' content understanding?

Conversational Question Answering

RAP-Net: Recurrent Attention Pooling Networks for Dialogue Response Selection

no code implementations21 Mar 2019 Chao-Wei Huang, Ting-Rui Chiang, Shang-Yu Su, Yun-Nung Chen

Response selection has been an emerging research topic due to the growing interest in dialogue modeling; the goal of the task is to select an appropriate response for continuing a dialogue.

Learning Multi-Level Information for Dialogue Response Selection by Highway Recurrent Transformer

no code implementations21 Mar 2019 Ting-Rui Chiang, Chao-Wei Huang, Shang-Yu Su, Yun-Nung Chen

With the increasing research interest in dialogue response generation, an emerging branch formulates this task as selecting the next sentence: given the partial dialogue context, the goal is to determine the most probable next sentence.

Response Generation · Sentence

Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems

1 code implementation NAACL 2019 Ting-Rui Chiang, Yun-Nung Chen

Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions.

Math · Math Word Problem Solving +2
