1 code implementation • EMNLP 2021 • Po-Nien Kung, Sheng-Siang Yin, Yi-Cheng Chen, Tse-Hsuan Yang, Yun-Nung Chen
Multi-task auxiliary learning utilizes a set of relevant auxiliary tasks to improve the performance of a primary task.
1 code implementation • 2 Oct 2024 • Chao-Wei Huang, Yun-Nung Chen
In this paper, we address this gap by proposing FactAlign, a novel alignment framework designed to enhance the factuality of LLMs' long-form responses while maintaining their helpfulness.
1 code implementation • 2 Oct 2024 • Chao-Wei Huang, Yun-Nung Chen
Effective information retrieval (IR) from vast datasets relies on advanced techniques to extract relevant information in response to queries.
no code implementations • 27 Sep 2024 • Yung-Yu Shih, Ziwei Xu, Hiroya Takamura, Yun-Nung Chen, Chung-Chi Chen
Question answering (QA) has been a long-standing focus in the NLP field, predominantly addressing reading comprehension and common sense QA.
no code implementations • 15 Sep 2024 • Chao-Han Huck Yang, Taejin Park, Yuan Gong, Yuanchao Li, Zhehuai Chen, Yen-Ting Lin, Chen Chen, Yuchen Hu, Kunal Dhawan, Piotr Żelasko, Chao Zhang, Yun-Nung Chen, Yu Tsao, Jagadeesh Balam, Boris Ginsburg, Sabato Marco Siniscalchi, Eng Siong Chng, Peter Bell, Catherine Lai, Shinji Watanabe, Andreas Stolcke
Given recent advances in generative AI technology, a key question is how large language models (LLMs) can enhance acoustic modeling tasks using text decoding results from a frozen, pretrained automatic speech recognition (ASR) model.
Automatic Speech Recognition • Automatic Speech Recognition (ASR) +4
1 code implementation • 5 Aug 2024 • Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-Yi Lee, Yun-Nung Chen
Structured generation, the process of producing content in standardized formats like JSON and XML, is widely utilized in real-world applications to extract key output information from large language models (LLMs).
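To make the idea of structured generation concrete, here is a minimal sketch of requesting JSON from a model and validating the result; the llm_generate helper and the key set are hypothetical and not taken from the paper.

    import json

    def llm_generate(prompt: str) -> str:
        """Placeholder for any LLM completion call (hypothetical)."""
        raise NotImplementedError

    def extract_json(prompt: str, required_keys: set) -> dict:
        """Ask the model for a single JSON object and validate its keys."""
        instruction = (
            prompt
            + "\nRespond with a single JSON object containing the keys: "
            + ", ".join(sorted(required_keys))
        )
        raw = llm_generate(instruction)
        data = json.loads(raw)              # raises if the model breaks the format
        missing = required_keys - data.keys()
        if missing:
            raise ValueError(f"missing keys: {missing}")
        return data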
1 code implementation • 20 Jul 2024 • Cheng-Kuang Wu, Zhi Rui Tam, Chao-Chung Wu, Chieh-Yen Lin, Hung-Yi Lee, Yun-Nung Chen
This study explores the proactive ability of LLMs to seek user support.
1 code implementation • 4 Jul 2024 • Chang-Sheng Kao, Yun-Nung Chen
Recent advancements in dialogue systems have highlighted the significance of integrating multimodal responses, which enable conveying ideas through diverse modalities rather than solely relying on text-based interactions.
1 code implementation • 4 Jul 2024 • Hsin-Yu Chang, Pei-Yu Chen, Tun-Hsiang Chou, Chang-Sheng Kao, Hsuan-Yun Yu, Yen-Ting Lin, Yun-Nung Chen
This paper provides a detailed survey of synthetic data techniques.
no code implementations • 1 Jul 2024 • Tzu-Han Lin, Chen-An Li, Hung-Yi Lee, Yun-Nung Chen
Reinforcement learning from human feedback (RLHF) is a popular strategy for aligning large language models (LLMs) with desired behaviors.
1 code implementation • 13 Jun 2024 • Cheng-Kuang Wu, Zhi Rui Tam, Chieh-Yen Lin, Yun-Nung Chen, Hung-Yi Lee
Recent works have shown that large language model (LLM) agents are able to improve themselves from experience, which is an important ability for continuous enhancement post-deployment.
no code implementations • 3 Jun 2024 • Tzu-Lin Kuo, Tzu-Wei Chiu, Tzung-Sheng Lin, Sheng-Yang Wu, Chao-Wei Huang, Yun-Nung Chen
By examining state-of-the-art GR techniques and their applications, this survey aims to provide a foundational understanding of GR and inspire further innovations in this transformative approach to information retrieval.
no code implementations • 3 Jun 2024 • Ji-Lun Peng, Sijia Cheng, Egil Diau, Yung-Yu Shih, Po-Heng Chen, Yen-Ting Lin, Yun-Nung Chen
We propose a two-stage framework, from "core ability" to "agent", clearly explaining how LLMs can be applied based on their specific capabilities, along with the evaluation methods at each stage.
1 code implementation • 3 Jun 2024 • Cheng-Hsun Hsueh, Paul Kuo-Ming Huang, Tzu-Han Lin, Che-Wei Liao, Hung-Chieh Fang, Chao-Wei Huang, Yun-Nung Chen
To foster future research, we have publicly released complementary materials, such as the paper collection, at https://github.com/MiuLab/EditLLM-Survey.
1 code implementation • 3 Jun 2024 • Yu-Min Tseng, Yu-Chao Huang, Teng-Yun Hsiao, Wei-Lin Chen, Chao-Wei Huang, Yu Meng, Yun-Nung Chen
The concept of persona, originally adopted in the dialogue literature, has resurged as a promising framework for tailoring large language models (LLMs) to specific contexts (e.g., personalized search, LLM-as-a-judge).
no code implementations • 29 Apr 2024 • Wen-Yu Chang, Yun-Nung Chen
This model excels at transitioning between topics, understanding user intents, and selecting appropriate strategies.
2 code implementations • 29 Mar 2024 • Po-Heng Chen, Sijia Cheng, Wei-Lin Chen, Yen-Ting Lin, Yun-Nung Chen
We present TMLU, a holistic evaluation suite tailored for assessing the advanced knowledge and reasoning capabilities of LLMs in the context of Taiwanese Mandarin.
1 code implementation • 25 Mar 2024 • Chao-Wei Huang, Yun-Nung Chen
This paper introduces InstUPR, an unsupervised passage reranking method based on large language models (LLMs).
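As a rough illustration of LLM-based reranking in general (a sketch under assumptions, not InstUPR's exact instruction-based scoring scheme), one can score each candidate passage with an LLM and sort by that score; llm_relevance_score is a hypothetical helper.

    def llm_relevance_score(query: str, passage: str) -> float:
        """Placeholder: prompt an instruction-following LLM for a relevance
        judgment and map it to a number (hypothetical helper)."""
        raise NotImplementedError

    def rerank(query: str, passages: list) -> list:
        """Score every candidate passage with the LLM and sort descending."""
        scored = [(llm_relevance_score(query, p), p) for p in passages]
        return [p for _, p in sorted(scored, key=lambda x: x[0], reverse=True)]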
1 code implementation • 6 Mar 2024 • Chao-Wei Huang, Chen-An Li, Tsu-Yuan Hsu, Chen-Yu Hsu, Yun-Nung Chen
Dense retrieval methods have demonstrated promising performance in multilingual information retrieval, where queries and documents can be in different languages.
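For readers unfamiliar with dense retrieval, a generic dual-encoder search looks roughly like the sketch below; the encode function stands in for any multilingual sentence encoder and is an assumption, not the paper's model.

    import numpy as np

    def encode(texts):
        """Placeholder for any multilingual sentence encoder (hypothetical)."""
        raise NotImplementedError

    def dense_search(query: str, documents: list, top_k: int = 5):
        """Embed the query and documents, then rank by inner-product similarity."""
        q = encode([query])[0]                  # query vector, shape (dim,)
        d = np.stack(encode(documents))         # document matrix, shape (n_docs, dim)
        scores = d @ q                          # relevance scores
        order = np.argsort(-scores)[:top_k]
        return [(documents[i], float(scores[i])) for i in order]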
2 code implementations • 29 Nov 2023 • Yen-Ting Lin, Yun-Nung Chen
Leveraging a comprehensive pretraining corpus and instruction-finetuning datasets, we have developed a model that not only understands the complexities of Traditional Chinese but also embodies the cultural context of Taiwan.
1 code implementation • 13 Sep 2023 • Chao-Wei Huang, Chen-Yu Hsu, Tsu-Yuan Hsu, Chen-An Li, Yun-Nung Chen
Conversational search provides a natural interface for information retrieval (IR).
no code implementations • 28 Aug 2023 • Wen-Yu Chang, Yun-Nung Chen
In recent research on dialogue systems and corpora, there has been a significant focus on two distinct categories: task-oriented (TOD) and open-domain (chit-chat) dialogues.
1 code implementation • 9 Jun 2023 • Ze-Song Xu, Yun-Nung Chen
Overall, our findings highlight the potential for this method to enhance the scalability and practicality of DRE systems.
1 code implementation • 24 May 2023 • Wei-Lin Chen, Cheng-Kuang Wu, Yun-Nung Chen, Hsin-Hsi Chen
Finally, we perform ICL for the test input with the pseudo-input-label pairs as demonstrations.
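That final step, building the prompt from pseudo input-label pairs, can be pictured as below; the template is illustrative only and not the paper's exact format.

    def build_icl_prompt(pseudo_pairs, test_input: str) -> str:
        """Concatenate pseudo demonstrations and append the test input."""
        demos = "\n\n".join(f"Input: {x}\nLabel: {y}" for x, y in pseudo_pairs)
        return f"{demos}\n\nInput: {test_input}\nLabel:"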
no code implementations • 23 May 2023 • Yen-Ting Lin, Yun-Nung Chen
We propose LLM-Eval, a unified multi-dimensional automatic evaluation method for open-domain conversations with large language models (LLMs).
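In the spirit of a single-prompt, multi-dimensional scorer, a minimal sketch might look like the following; the dimension names, the 1-5 scale, and the llm_generate helper are assumptions for illustration rather than the paper's exact setup.

    import json

    def llm_generate(prompt: str) -> str:
        """Placeholder for any LLM call (hypothetical)."""
        raise NotImplementedError

    def llm_eval(context: str, response: str,
                 dimensions=("appropriateness", "content", "grammar", "relevance")) -> dict:
        """Ask the LLM to rate one response on several dimensions in one call."""
        prompt = (
            "Rate the response to the dialogue context on each dimension "
            f"({', '.join(dimensions)}) from 1 to 5. Reply with a JSON object.\n"
            f"Context: {context}\nResponse: {response}"
        )
        return json.loads(llm_generate(prompt))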
1 code implementation • 17 Nov 2022 • Hung-Chieh Fang, Kuo-Han Hung, Chao-Wei Huang, Yun-Nung Chen
Open-domain conversational question answering can be viewed as two tasks: passage retrieval and conversational question answering, where the former relies on selecting candidate passages from a large corpus and the latter requires a better understanding of the question in context to predict the answers.
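The two-task view translates directly into a retrieve-then-read pipeline; the sketch below uses hypothetical retrieve and read components and only shows the control flow.

    def retrieve(question: str, history: list, corpus, k: int = 20) -> list:
        """Placeholder retriever over the large corpus, conditioned on the
        question and the conversation history (hypothetical)."""
        raise NotImplementedError

    def read(question: str, history: list, passages: list) -> str:
        """Placeholder conversational QA reader (hypothetical)."""
        raise NotImplementedError

    def answer(question: str, history: list, corpus) -> str:
        """Stage 1: select candidate passages; stage 2: predict the answer."""
        passages = retrieve(question, history, corpus)
        return read(question, history, passages)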
1 code implementation • 12 Oct 2022 • Hui-Chi Kuo, Yun-Nung Chen
Intelligent virtual assistants are currently designed to perform tasks or services explicitly mentioned by users, so multiple related domains or tasks need to be performed one by one through a long conversation with many explicit intents.
1 code implementation • SIGDIAL (ACL) 2022 • Chun-Mao Lai, Ming-Hao Hsu, Chao-Wei Huang, Yun-Nung Chen
Prior work has demonstrated that data augmentation is useful for improving dialogue state tracking.
2 code implementations • NAACL (ClinicalNLP) 2022 • Chao-Wei Huang, Shang-Chi Tsai, Yun-Nung Chen
Prior work has shown that pretrained language models underperform on this task under the regular fine-tuning scheme.
no code implementations • 16 May 2022 • Yen-Ting Lin, Hui-Chi Kuo, Ze-Song Xu, Ssu Chiu, Chieh-Chi Hung, Yi-Cheng Chen, Chao-Wei Huang, Yun-Nung Chen
This paper introduces Miutsu, National Taiwan University's Alexa Prize TaskBot, which is designed to assist users in completing tasks requiring multiple steps and decisions in two different domains -- home improvement and cooking.
1 code implementation • 2 May 2022 • Ya-Hsin Chang, Yun-Nung Chen
Spoken language understanding (SLU) is an essential task for machines to understand human speech for better interactions.
no code implementations • 25 Apr 2022 • Chao-Wei Huang, Kai-Chou Yang, Zi-Yuan Chen, Hao-Chien Cheng, Po-Yu Wu, Yu-Yang Huang, Chung-Kai Hsieh, Geng-Zhi Wildsky Fann, Ting-Yin Cheng, Ethan Tu, Yun-Nung Chen
With thousands of news articles from hundreds of sources distributed and shared every day, news consumption and information acquisition have been increasingly difficult for readers.
1 code implementation • ACL 2022 • Ssu Chiu, Maolin Li, Yen-Ting Lin, Yun-Nung Chen
The first one focuses on chatting with users and making them engage in the conversations, where selecting a proper topic to fit the dialogue context is essential for a successful dialogue.
no code implementations • 27 Mar 2022 • Ting-Chun Wang, Shang-Yu Su, Yun-Nung Chen
Conversational recommendation systems (CRS) pose a complex problem that consists of two main tasks: (1) recommendation and (2) response generation.
no code implementations • 2 Dec 2021 • Jia-Yan Wu, Alexander Te-Wei Shieh, Shih-Ju Hsu, Yun-Nung Chen
Machine-generated citation sentences can aid automated scientific literature review and assist article writing.
no code implementations • 11 Oct 2021 • Po-Nien Kung, Chung-Cheng Chang, Tse-Hsuan Yang, Hsin-Kai Hsu, Yu-Jia Liou, Yun-Nung Chen
Task-oriented dialogue systems have been a promising area in the NLP field.
no code implementations • EMNLP (BlackboxNLP) 2021 • Ting-Rui Chiang, Yun-Nung Chen
This work focuses on relating two mysteries in neural-based text generation: exposure bias, and text degeneration.
1 code implementation • SIGDIAL (ACL) 2022 • Po-Wei Lin, Shang-Yu Su, Yun-Nung Chen
The goal of dialogue relation extraction (DRE) is to identify the relation between two entities in a given dialogue.
1 code implementation • NAACL 2021 • Shang-Chi Tsai, Chao-Wei Huang, Yun-Nung Chen
To address this problem, we propose a two-stage framework to improve automatic ICD coding by capturing the label correlation.
no code implementations • 14 Jun 2021 • Ting-Rui Chiang, Yun-Nung Chen
Hence, the acceptable reduction in performance on the pre-trained task when distilling a model can be derived from the results, and we further compare the behavior of the pruned model before and after fine-tuning.
1 code implementation • ACL (WOAH) 2021 • Yung-Sung Chuang, Mingye Gao, Hongyin Luo, James Glass, Hung-Yi Lee, Yun-Nung Chen, Shang-Wen Li
Automatic detection of toxic language plays an essential role in protecting social media users, especially minority groups, from verbal abuse.
1 code implementation • 27 Jan 2021 • Yen-Ting Lin, Yun-Nung Chen
There has been a rapid development in data-driven task-oriented dialogue systems with the benefit of large-scale datasets.
1 code implementation • NeurIPS 2020 • Chun-Hsing Lin, Siang-Ruei Wu, Hung-Yi Lee, Yun-Nung Chen
Score function-based natural language generation (NLG) approaches such as REINFORCE, in general, suffer from low sample efficiency and training instability problems.
1 code implementation • 27 Nov 2020 • Chun-Hsing Lin, Siang-Ruei Wu, Hung-Yi Lee, Yun-Nung Chen
Score function-based natural language generation (NLG) approaches such as REINFORCE, in general, suffer from low sample efficiency and training instability problems.
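To see where the high variance of score-function estimators comes from, here is a toy REINFORCE loop on a 5-way categorical "policy"; this is a generic illustration of the estimator, not these papers' proposed method, and the reward function and hyperparameters are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.zeros(5)                      # toy unconditional policy over 5 tokens

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def reward(token):                        # toy reward: prefer token 3
        return 1.0 if token == 3 else 0.0

    lr, batch = 0.1, 64
    for step in range(200):
        probs = softmax(logits)
        tokens = rng.choice(5, size=batch, p=probs)
        rewards = np.array([reward(t) for t in tokens])
        baseline = rewards.mean()             # simple variance reduction
        grad = np.zeros_like(logits)
        for t, r in zip(tokens, rewards):
            dlogp = -probs                    # d log softmax / d logits ...
            dlogp[t] += 1.0                   # ... is one_hot(t) - probs
            grad += (r - baseline) * dlogp
        logits += lr * grad / batch           # gradient ascent on expected reward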
no code implementations • 12 Nov 2020 • Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David Traum, Maxine Eskenazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, Rajen Subba
The challenge covers four tracks, including the interactive evaluation of dialog.
1 code implementation • 2 Nov 2020 • Chao-Wei Huang, Yun-Nung Chen
It has been shown that encoding lattices, as opposed to the 1-best results generated by an automatic speech recognizer (ASR), boosts the performance of spoken language understanding (SLU).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Po-Nien Kung, Tse-Hsuan Yang, Yi-Cheng Chen, Sheng-Siang Yin, Yun-Nung Chen
Extracting rationales can help humans understand which information the model utilizes and how it makes predictions, towards better interpretability.
no code implementations • 28 Oct 2020 • Boyo Chen, Buo-Fu Chen, Yun-Nung Chen
Analyzing big geophysical observational data collected by multiple advanced sensors on various satellite platforms promotes our understanding of the geophysical system.
1 code implementation • EMNLP 2020 • Yu-An Wang, Yun-Nung Chen
This paper focuses on providing new insight into pre-trained position embeddings through feature-level analyses and empirical experiments on most iconic NLP tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shang-Yu Su, Yung-Sung Chuang, Yun-Nung Chen
Natural language understanding (NLU) and Natural language generation (NLG) tasks hold a strong dual relationship, where NLU aims at predicting semantic labels based on natural language utterances and NLG does the opposite.
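The duality can be stated as a round-trip property: parsing an utterance and regenerating from the predicted frame should preserve the meaning, and vice versa. A minimal sketch, with both models as hypothetical placeholders:

    def nlu(utterance: str) -> dict:
        """Placeholder NLU model: utterance -> semantic frame (hypothetical)."""
        raise NotImplementedError

    def nlg(frame: dict) -> str:
        """Placeholder NLG model: semantic frame -> utterance (hypothetical)."""
        raise NotImplementedError

    def round_trip_consistent(utterance: str) -> bool:
        """Check that NLU(NLG(NLU(x))) recovers the same frame as NLU(x)."""
        frame = nlu(utterance)
        return nlu(nlg(frame)) == frame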
1 code implementation • EMNLP 2020 • Yung-Sung Chuang, Shang-Yu Su, Yun-Nung Chen
It is challenging to perform lifelong language learning (LLL) on a stream of different tasks without any performance degradation compared to the multi-task counterparts.
2 code implementations • ACL 2020 • Chao-Wei Huang, Yun-Nung Chen
Pre-trained language models have achieved substantial improvements on many NLP tasks.
1 code implementation • ACL 2020 • Shang-Yu Su, Chao-Wei Huang, Yun-Nung Chen
Prior work made the first attempt to utilize the duality between NLU and NLG to improve performance via a dual supervised learning framework.
no code implementations • 28 Apr 2020 • Yau-Shian Wang, Hung-Yi Lee, Yun-Nung Chen
Also, the performance is on par with a recently proposed weakly-supervised text classification method.
no code implementations • IJCNLP 2019 • Ting-Yun Chang, Yun-Nung Chen
Contextualized word embeddings have boosted many NLP tasks compared with traditional static word embeddings.
no code implementations • WS 2019 • Shang-Chi Tsai, Ting-Yun Chang, Yun-Nung Chen
Clinical notes are essential medical documents to record each patient's symptoms.
1 code implementation • WS 2019 • Alexander Te-Wei Shieh, Yung-Sung Chuang, Shang-Yu Su, Yun-Nung Chen
We first build a pointer-generator baseline model for conclusion generation.
1 code implementation • IJCNLP 2019 • Yi-Lin Tuan, Yun-Nung Chen, Hung-Yi Lee
This paper proposes a new task about how to apply dynamic knowledge graphs in neural conversation models and presents a novel TV series conversation corpus (DyKgChat) for the task.
1 code implementation • 24 Sep 2019 • Chao-Wei Huang, Yun-Nung Chen
Employing pre-trained language models (LM) to extract contextualized word representations has achieved state-of-the-art performance on various NLP tasks.
1 code implementation • 24 Sep 2019 • Ting-Rui Chiang, Hao-Tong Ye, Yun-Nung Chen
However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark dataset reflect models' content understanding?
3 code implementations • IJCNLP 2019 • Yau-Shian Wang, Hung-Yi Lee, Yun-Nung Chen
This paper proposes Tree Transformer, which adds an extra constraint to attention heads of the bidirectional Transformer encoder in order to encourage the attention heads to follow tree structures.
1 code implementation • IJCNLP 2019 • Yi-Ting Yeh, Yun-Nung Chen
Standard accuracy metrics indicate that modern reading comprehension systems have achieved strong performance in many question answering datasets.
no code implementations • 22 Aug 2019 • Kuan-Yen Lin, Chao-Chun Hsu, Yun-Nung Chen, Lun-Wei Ku
After the entropy-enhanced DMN secures the video context, we apply an attention model that incorporates the summary and caption to generate an accurate answer to the question about the video.
no code implementations • 14 Aug 2019 • Yi-Ting Yeh, Tzu-Chuan Lin, Hsiao-Hua Cheng, Yu-Hsuan Deng, Shang-Yu Su, Yun-Nung Chen
Visual question answering and visual dialogue tasks have been increasingly studied in the multimodal field towards more practical real-world scenarios.
1 code implementation • WS 2019 • Yi-Ting Yeh, Yun-Nung Chen
Conversational machine comprehension requires deep understanding of the dialogue flow, and the prior work proposed FlowQA to implicitly model the context representations in reasoning for better understanding.
no code implementations • 24 May 2019 • Shang-Yu Su, Po-Wei Lin, Yun-Nung Chen
Spoken dialogue systems that assist users to solve complex tasks such as movie ticket booking have become an emerging research topic in artificial intelligence and natural language processing areas.
2 code implementations • ACL 2019 • Shang-Yu Su, Chao-Wei Huang, Yun-Nung Chen
Natural language understanding (NLU) and natural language generation (NLG) are both critical research topics in the NLP field.
1 code implementation • 16 Apr 2019 • Chia-Hsuan Lee, Yun-Nung Chen, Hung-Yi Lee
Spoken question answering (SQA) is challenging due to complex reasoning on top of the spoken documents.
Ranked #3 on Spoken Language Understanding on Spoken-SQuAD
Automatic Speech Recognition • Automatic Speech Recognition (ASR) +4
no code implementations • 23 Mar 2019 • Hao-Tong Ye, Kai-Ling Lo, Shang-Yu Su, Yun-Nung Chen
End-to-end dialogue generation has achieved promising results without using handcrafted features and attributes specific for each task and corpus.
no code implementations • 21 Mar 2019 • Ting-Rui Chiang, Chao-Wei Huang, Shang-Yu Su, Yun-Nung Chen
With the increasing research interest in dialogue response generation, an emerging branch formulates this task as next-sentence selection: given the partial dialogue context, the goal is to determine the most probable next sentence.
no code implementations • 21 Mar 2019 • Chao-Wei Huang, Ting-Rui Chiang, Shang-Yu Su, Yun-Nung Chen
Response selection has become an emerging research topic due to the growing interest in dialogue modeling, where the goal of the task is to select an appropriate response for continuing a dialogue.
1 code implementation • NAACL 2019 • Ting-Rui Chiang, Yun-Nung Chen
Solving math word problems is a challenging task that requires accurate natural language understanding to bridge natural language texts and math expressions.
no code implementations • 31 Oct 2018 • Yu-An Wang, Yu-Kai Huang, Tzu-Chuan Lin, Shang-Yu Su, Yun-Nung Chen
Automatic melody generation has been a long-time aspiration for both AI researchers and musicians.
2 code implementations • 21 Oct 2018 • Ta-Chung Chi, Ching-Yen Shih, Yun-Nung Chen
This paper introduces the first dataset for evaluating English-Chinese Bilingual Contextual Word Similarity, namely BCWS (https://github.com/MiuLab/BCWS).
no code implementations • 27 Sep 2018 • Yau-Shian Wang, Yun-Nung Chen, Hung-Yi Lee
Learning discrete representations of data and then generating data from the discovered representations have been increasingly studied because the obtained discrete representations can benefit unsupervised learning.
1 code implementation • 19 Sep 2018 • Shang-Yu Su, Yun-Nung Chen
Natural language generation (NLG) is a critical component in spoken dialogue systems and can be divided into two phases: (1) sentence planning, which decides the overall sentence structure, and (2) surface realization, which determines specific word forms and flattens the sentence structure into a string.
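As a toy, template-based picture of those two phases (the paper's model is neural; this sketch only illustrates the decomposition, with a made-up restaurant frame):

    def sentence_planning(frame: dict) -> list:
        """Phase 1: decide which slots to mention and in what order."""
        return sorted(frame.keys())

    def surface_realization(plan: list, frame: dict) -> str:
        """Phase 2: flatten the planned structure into a word string."""
        clauses = [f"the {slot} is {frame[slot]}" for slot in plan]
        return "Sure, " + " and ".join(clauses) + "."

    frame = {"name": "Din Tai Fung", "area": "center"}
    print(surface_realization(sentence_planning(frame), frame))
    # -> Sure, the area is center and the name is Din Tai Fung.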
1 code implementation • 15 Sep 2018 • Chih-Wen Goo, Yun-Nung Chen
Neural abstractive summarization has been increasingly studied, where prior work mainly focused on summarizing single-speaker documents (news, scientific publications, etc.).
Abstractive Dialogue Summarization • Abstractive Text Summarization +1
1 code implementation • EMNLP 2018 • Ta-Chung Chi, Yun-Nung Chen
The model is evaluated on the Stanford Contextual Word Similarity (SCWS) dataset to ensure the quality of monolingual sense embeddings.
1 code implementation • 10 Sep 2018 • Ting-Yun Chang, Ta-Chung Chi, Shang-Chi Tsai, Yun-Nung Chen
This paper focuses on interpreting the embeddings for various aspects, including sense separation in the vector dimensions and definition generation.
no code implementations • 5 Sep 2018 • Shang-Yu Su, Pei-Chieh Yuan, Yun-Nung Chen
Spoken language understanding (SLU) is an essential component in conversational systems.
3 code implementations • EMNLP 2018 • Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen
This paper presents a Discriminative Deep Dyna-Q (D3Q) approach to improving the effectiveness and robustness of Deep Dyna-Q (DDQ), a recently proposed framework that extends the Dyna-Q algorithm to integrate planning for task-completion dialogue policy learning.
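For context, the underlying Dyna-Q idea interleaves learning from real transitions with planning steps drawn from a learned world model; the tabular sketch below shows that loop in its generic form and deliberately omits the discriminator that D3Q adds.

    import random
    from collections import defaultdict

    Q = defaultdict(float)          # (state, action) -> value
    model = {}                      # world model: (state, action) -> (reward, next_state)
    actions = range(4)
    alpha, gamma, n_planning = 0.1, 0.95, 10

    def q_update(s, a, r, s2):
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    def dyna_q_step(s, a, r, s2):
        q_update(s, a, r, s2)       # direct RL from the real transition
        model[(s, a)] = (r, s2)     # update the world model
        for _ in range(n_planning): # planning with simulated transitions
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            q_update(ps, pa, pr, ps2)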
1 code implementation • NAACL 2018 • Shang-Yu Su, Kai-Ling Lo, Yi-Ting Yeh, Yun-Nung Chen
Natural language generation (NLG) is a critical component in spoken dialogue systems.
2 code implementations • NAACL 2018 • Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, Yun-Nung Chen
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved the state-of-the-art performance, while they have independent attention weights.
Ranked #8 on Slot Filling on SNIPS
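A minimal joint model, sketched in PyTorch, shares one encoder between an utterance-level intent head and a per-token slot head; this is a generic baseline for illustration and does not include the paper's slot-gating mechanism.

    import torch
    import torch.nn as nn

    class JointNLU(nn.Module):
        """Shared BiLSTM encoder with separate intent and slot heads."""
        def __init__(self, vocab_size, n_slots, n_intents, dim=64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
            self.slot_head = nn.Linear(2 * dim, n_slots)
            self.intent_head = nn.Linear(2 * dim, n_intents)

        def forward(self, tokens):                       # tokens: (batch, seq_len)
            h, _ = self.enc(self.emb(tokens))            # (batch, seq_len, 2*dim)
            slot_logits = self.slot_head(h)              # per-token slot labels
            intent_logits = self.intent_head(h.mean(1))  # pooled utterance intent
            return slot_logits, intent_logits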
1 code implementation • NAACL 2018 • Shang-Yu Su, Pei-Chieh Yuan, Yun-Nung Chen
Spoken language understanding (SLU) is an essential component in conversational systems.
no code implementations • IJCNLP 2017 • Yun-Nung Chen, Jianfeng Gao
In the past decade, spoken dialogue systems have been the most prominent component in today's personal assistants.
no code implementations • 31 Oct 2017 • Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, Kam-Fai Wong
This paper presents a new method, adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems.
1 code implementation • IJCNLP 2017 • Ta-Chung Chi, Po-Chun Chen, Shang-Yu Su, Yun-Nung Chen
Language understanding (LU) and dialogue policy learning are two essential components in conversational systems.
1 code implementation • 30 Sep 2017 • Po-Chun Chen, Ta-Chung Chi, Shang-Yu Su, Yun-Nung Chen
However, the previous model only paid attention to the content in history utterances without considering their temporal information and speaker roles.
no code implementations • 16 Sep 2017 • Bo-Ru Lu, Frank Shyu, Yun-Nung Chen, Hung-Yi Lee, Lin-shan Lee
Connectionist temporal classification (CTC) is a powerful approach for sequence-to-sequence learning, and has been popularly used in speech recognition.
1 code implementation • EMNLP 2017 • Guang-He Lee, Yun-Nung Chen
This paper proposes to address the word sense ambiguity issue in an unsupervised manner, where word sense representations are learned along a word sense selection mechanism given contexts.
1 code implementation • 12 Apr 2017 • Ting-Hao 'Kenneth' Huang, Yun-Nung Chen, Jeffrey P. Bigham
Output-agreement mechanisms such as ESP Game have been widely used in human computation to obtain reliable human-generated labels.
no code implementations • 21 Mar 2017 • Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, Asli Celikyilmaz
Language understanding is a key component in a spoken dialogue system.
13 code implementations • IJCNLP 2017 • Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, Asli Celikyilmaz
One of the major drawbacks of modularized task-completion dialogue systems is that each module is trained individually, which presents several challenges.
10 code implementations • 17 Dec 2016 • Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, Yun-Nung Chen
Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator.
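The online training loop implied here alternates the agent's action with the simulator's user turn; the interface below (reset/step/act/observe) is a hypothetical sketch of that interaction, not the toolkit's actual API.

    def run_episode(agent, simulator, max_turns=20):
        """Roll out one dialogue between a policy agent and a user simulator."""
        state = simulator.reset()                        # simulator issues the first user turn
        total_reward = 0.0
        for _ in range(max_turns):
            action = agent.act(state)                    # agent picks a dialogue action
            state, reward, done = simulator.step(action) # simulator responds as the user
            agent.observe(state, reward, done)           # online policy update
            total_reward += reward
            if done:
                break
        return total_reward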
1 code implementation • 3 Dec 2016 • Xuesong Yang, Yun-Nung Chen, Dilek Hakkani-Tur, Paul Crook, Xiujun Li, Jianfeng Gao, Li Deng
Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance.
no code implementations • 12 Sep 2016 • Yun-Nung Chen, Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Gao, Li Deng
Natural language understanding (NLU) is a core component of a spoken dialogue system.
1 code implementation • ACL 2017 • Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, Li Deng
In this paper, we address this limitation by replacing symbolic queries with an induced "soft" posterior distribution over the KB that indicates which entities the user is interested in.
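A toy way to picture the "soft" lookup: turn per-slot beliefs about what the user wants into a normalized distribution over KB rows, rather than issuing one hard symbolic query. The scoring rule below is purely illustrative, not the paper's induced posterior.

    def soft_kb_lookup(kb_rows, slot_beliefs):
        """Score each KB row by the product of slot-value beliefs, then normalize."""
        scores = []
        for row in kb_rows:
            score = 1.0
            for slot, beliefs in slot_beliefs.items():
                score *= beliefs.get(row.get(slot), 1e-6)
            scores.append(score)
        total = sum(scores) or 1.0
        return [s / total for s in scores]

    kb = [{"genre": "comedy", "year": "2015"}, {"genre": "drama", "year": "2015"}]
    print(soft_kb_lookup(kb, {"genre": {"comedy": 0.8, "drama": 0.2}}))  # -> [0.8, 0.2]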
no code implementations • LREC 2016 • Ming Sun, Yun-Nung Chen, Zhenhao Hua, Yulian Tamres-Rudnicky, Arnab Dash, Alexander Rudnicky
Users will interact with an individual app on smart devices (e.g., phone, TV, car) to fulfill a specific goal (e.g., find a photographer), but users may also pursue more complex tasks that will span multiple domains and apps (e.g., plan a wedding ceremony).
no code implementations • LREC 2016 • Yun-Nung Chen, Dilek Hakkani-Tür
This paper presents an extended set of annotations for the ICSI meeting corpus with a goal of deeply understanding meeting conversations, where participant turns are annotated by actionable items that could be performed by an automated meeting assistant.