Search Results for author: Willy Chung

Found 10 papers, 6 papers with code

Contrastive Learning for Inference in Dialogue

1 code implementation • 19 Oct 2023 • Etsuko Ishii, Yan Xu, Bryan Wilie, Ziwei Ji, Holy Lovenia, Willy Chung, Pascale Fung

Inferences, especially those derived from inductive processes, are a crucial component of conversation, complementing the information implicitly or explicitly conveyed by a speaker.

Contrastive Learning

InstructTODS: Large Language Models for End-to-End Task-Oriented Dialogue Systems

1 code implementation • 13 Oct 2023 • Willy Chung, Samuel Cahyawijaya, Bryan Wilie, Holy Lovenia, Pascale Fung

We present InstructTODS, a novel off-the-shelf framework for zero-shot end-to-end task-oriented dialogue systems that can adapt to diverse domains without fine-tuning (a schematic sketch of such a pipeline follows below).

Dialogue State Tracking • Informativeness • +4
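
Since the abstract describes a zero-shot, end-to-end task-oriented dialogue pipeline, here is a minimal sketch of how an LLM might be prompted to extract the user's goal, query a knowledge base, and respond without fine-tuning. The `llm` callable, the prompt wording, and the toy knowledge base are hypothetical illustrations, not the actual InstructTODS implementation.

```python
# Minimal sketch of a zero-shot, LLM-driven task-oriented dialogue turn.
# The `llm` callable is a hypothetical stand-in for any instruction-following
# model; the prompts and toy knowledge base are illustrative only.
import json

def llm(prompt: str) -> str:
    """Placeholder for an instruction-following LLM call."""
    raise NotImplementedError("plug in your model / API client here")

KB = [  # toy domain knowledge base
    {"name": "Curry House", "food": "indian", "area": "centre"},
    {"name": "Noodle Bar", "food": "chinese", "area": "north"},
]

def tod_turn(history: list[str], user_utterance: str) -> str:
    dialogue = "\n".join(history + [f"User: {user_utterance}"])

    # 1) Ask the LLM to summarise the user's goal as a structured query.
    query = json.loads(llm(
        "Read the dialogue and output the user's constraints as JSON "
        f"with keys 'food' and 'area'.\n\n{dialogue}\n\nJSON:"
    ))

    # 2) Query the knowledge base with the extracted constraints.
    matches = [r for r in KB
               if all(r.get(k) == v for k, v in query.items() if v)]

    # 3) Ask the LLM to respond, grounded in the retrieved records.
    return llm(
        f"Dialogue:\n{dialogue}\n\nMatching restaurants: {matches}\n"
        "Write a helpful system response:"
    )
```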

Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition

1 code implementation • 26 Jun 2023 • Samuel Cahyawijaya, Holy Lovenia, Willy Chung, Rita Frieske, Zihan Liu, Pascale Fung

In this work, we analyze the transferability of emotion recognition across three languages (English, Mandarin Chinese, and Cantonese) and two age groups (adults and the elderly); the resulting transfer grid is sketched below.

Data Augmentation • Speech Emotion Recognition
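
The analysis described above amounts to a train-on-one-condition, test-on-another grid. A schematic sketch follows, with hypothetical (language, age) conditions and placeholder train/evaluate stubs, not the paper's actual experimental setup.

```python
# Schematic cross-lingual / cross-age transfer matrix for speech emotion
# recognition. The conditions and the train/evaluate stubs are hypothetical;
# the point is the source-vs-target evaluation grid.
from itertools import product

GROUPS = [  # (language, age group) conditions, illustrative only
    ("english", "adult"), ("english", "elderly"),
    ("mandarin", "adult"), ("mandarin", "elderly"),
    ("cantonese", "adult"), ("cantonese", "elderly"),
]

def train(condition):
    """Placeholder: fit an SER model on one (language, age) condition."""
    return f"model[{condition}]"

def evaluate(model, condition) -> float:
    """Placeholder: return held-out accuracy on a target condition."""
    return 0.0

def transfer_matrix():
    """Train on each condition, test on every other one."""
    results = {}
    for src, tgt in product(GROUPS, GROUPS):
        model = train(src)
        results[(src, tgt)] = evaluate(model, tgt)
    return results
```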

InstructAlign: High-and-Low Resource Language Alignment via Continual Crosslingual Instruction Tuning

1 code implementation • 23 May 2023 • Samuel Cahyawijaya, Holy Lovenia, Tiezheng Yu, Willy Chung, Pascale Fung

Our results demonstrate the effectiveness of InstructAlign in enabling the model to understand low-resource languages with limited parallel data while preventing catastrophic forgetting.
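
A minimal sketch of what continual cross-lingual instruction tuning with replay could look like: parallel sentences become translation-style instruction examples, mixed with replayed earlier instruction data to mitigate catastrophic forgetting. The template, `replay_ratio`, and data format are assumptions for illustration, not InstructAlign's exact recipe.

```python
# Sketch: turning parallel sentences into cross-lingual instruction-tuning
# examples and mixing in replay of earlier instruction data to reduce
# catastrophic forgetting. Template and replay ratio are assumptions.
import random

TEMPLATE = "Translate the following {src} sentence to {tgt}:\n{text}"

def make_alignment_examples(parallel, src_lang, tgt_lang):
    """parallel: list of (src_sentence, tgt_sentence) pairs."""
    return [
        {"instruction": TEMPLATE.format(src=src_lang, tgt=tgt_lang, text=s),
         "output": t}
        for s, t in parallel
    ]

def build_training_mix(new_examples, old_instruction_data, replay_ratio=0.3):
    """Interleave new cross-lingual examples with replayed old data."""
    n_replay = int(len(new_examples) * replay_ratio)
    replay = random.sample(old_instruction_data,
                           min(n_replay, len(old_instruction_data)))
    mix = new_examples + replay
    random.shuffle(mix)
    return mix
```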

Learn What NOT to Learn: Towards Generative Safety in Chatbots

no code implementations • 21 Apr 2023 • Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeed Ghadimi, Hossein Sameti, Pascale Fung

Our approach differs from the standard contrastive learning framework in that it automatically obtains positive and negative signals from safe and unsafe language distributions learned beforehand (see the sketch below).

Contrastive Learning
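
A hedged sketch of one way such a contrastive safety signal could be implemented: reweight the standard NLL so that tokens more likely under an "unsafe" expert LM than under a "safe" one contribute less to learning. The two expert LMs, the weighting scheme, and `alpha` are assumptions for illustration, not the paper's exact objective.

```python
# Sketch of a contrastive-style training signal for generative safety:
# push the chatbot's token distribution toward a "safe" language model and
# away from an "unsafe" one. Expert LMs and weighting are assumptions.
import torch
import torch.nn.functional as F

def contrastive_safety_loss(model_logits, safe_logits, unsafe_logits,
                            target_ids, alpha=1.0):
    """
    model_logits / safe_logits / unsafe_logits: (batch, seq, vocab)
    target_ids: (batch, seq) gold next-token ids
    """
    logp = F.log_softmax(model_logits, dim=-1)
    logp_tok = logp.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

    # How much more likely each target token is under the unsafe expert
    # than under the safe expert: a per-token "unsafety" weight.
    with torch.no_grad():
        safe_lp = F.log_softmax(safe_logits, dim=-1).gather(
            -1, target_ids.unsqueeze(-1)).squeeze(-1)
        unsafe_lp = F.log_softmax(unsafe_logits, dim=-1).gather(
            -1, target_ids.unsqueeze(-1)).squeeze(-1)
        unsafety = (unsafe_lp - safe_lp).clamp(min=0.0)

    # Standard NLL, but down-weight tokens the unsafe expert prefers,
    # so the model stops learning to produce them.
    weights = torch.exp(-alpha * unsafety)
    return -(weights * logp_tok).mean()
```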

Clozer: Adaptable Data Augmentation for Cloze-style Reading Comprehension

no code implementations • 30 Mar 2022 • Holy Lovenia, Bryan Wilie, Willy Chung, Min Zeng, Samuel Cahyawijaya, Su Dan, Pascale Fung

Task-adaptive pre-training (TAPT) alleviates the lack of labelled data and provides a performance lift by adapting unlabelled data to the downstream task (a minimal TAPT sketch follows below).

Data Augmentation • Machine Reading Comprehension • +1
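
For context, a minimal TAPT sketch: continue masked-language-model pretraining on unlabelled task-domain text before fine-tuning on the labelled task. The checkpoint name, hyperparameters, and toy corpus are placeholders, not Clozer's setup.

```python
# Sketch of task-adaptive pre-training (TAPT): continue masked-language-model
# pretraining on unlabelled task-domain text before task fine-tuning.
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "roberta-base"  # any MLM-pretrained checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

unlabelled_texts = ["...task-domain passages go here..."]  # placeholder corpus

class TextDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, max_length=256)
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: v[i] for k, v in self.enc.items()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt-ckpt", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=TextDataset(unlabelled_texts),
    # The collator randomly masks 15% of tokens, turning raw text into
    # a masked-LM objective, which is the heart of TAPT.
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()  # afterwards, fine-tune the adapted encoder on the labelled task
```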
