no code implementations • 11 Jun 2019 • Qingyang Wu, He Li, Lexin Li, Zhou Yu
With the widespread success of deep neural networks in science and technology, it is becoming increasingly important to quantify the uncertainty of the predictions produced by deep learning.
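The general idea behind uncertainty quantification for neural predictions can be sketched with an ensemble of stochastic forward passes (in the spirit of MC dropout). This is an illustrative stand-in using only the standard library, not the method proposed in the paper; the toy "network", its weights, and the drop probability are all hypothetical.

```python
import random
import statistics

def stochastic_predict(x, drop_p=0.2, seed=None):
    """Toy linear 'network' with dropout kept active at inference time:
    each weight is dropped with probability drop_p and survivors are
    rescaled by 1/(1-drop_p), so the expected output matches the
    deterministic model (illustrative only)."""
    rng = random.Random(seed)
    weights = [0.5, 1.5, -0.7]
    kept = [w / (1 - drop_p) if rng.random() > drop_p else 0.0 for w in weights]
    return sum(w * x for w in kept)

def predict_with_uncertainty(x, n_passes=200):
    """Run many stochastic passes; the spread of the outputs is a crude
    estimate of predictive uncertainty."""
    samples = [stochastic_predict(x, seed=i) for i in range(n_passes)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = predict_with_uncertainty(2.0)
# mean fluctuates around the deterministic output 2.6; std > 0 reflects uncertainty
```

The deterministic output here is (0.5 + 1.5 - 0.7) * 2.0 = 2.6; the sample standard deviation is the uncertainty estimate.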
1 code implementation • EACL 2021 • Qingyang Wu, Yichi Zhang, Yu Li, Zhou Yu
Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks.
no code implementations • 25 Nov 2019 • Qingyang Wu, Lei Li, Hao Zhou, Ying Zeng, Zhou Yu
We propose to automate this headline editing process through neural network models to provide more immediate writing support for these social media news writers.
no code implementations • 7 Apr 2020 • Qingyang Wu, Lei Li, Zhou Yu
Generative Adversarial Networks (GANs) for text generation have recently received many criticisms, as they perform worse than their MLE counterparts.
1 code implementation • 24 Apr 2020 • Jing Gu, Qingyang Wu, Chongruo Wu, Weiyan Shi, Zhou Yu
The recent success of large pre-trained language models such as BERT and GPT-2 has suggested the effectiveness of incorporating language priors in downstream dialog generation tasks.
1 code implementation • 11 May 2020 • Wenmian Yang, Guangtao Zeng, Bowen Tan, Zeqian Ju, Subrato Chakravorty, Xuehai He, Shu Chen, Xingyi Yang, Qingyang Wu, Zhou Yu, Eric Xing, Pengtao Xie
On these two datasets, we train several dialogue generation models based on Transformer, GPT, and BERT-GPT.
no code implementations • 7 Aug 2020 • Jing Gu, Qingyang Wu, Zhou Yu
Automatic evaluation for open-ended natural language generation tasks remains a challenge.
no code implementations • 14 Oct 2020 • Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou Yu
Transformers have reached remarkable success in sequence modeling.
no code implementations • ACL 2021 • Jing Gu, Qingyang Wu, Chongruo Wu, Weiyan Shi, Zhou Yu
However, the performance of pre-trained models on task-oriented dialog tasks is still under-explored.
1 code implementation • ACL 2021 • Meng Zhou, Zechen Li, Bowen Tan, Guangtao Zeng, Wenmian Yang, Xuehai He, Zeqian Ju, Subrato Chakravorty, Shu Chen, Xingyi Yang, Yichen Zhang, Qingyang Wu, Zhou Yu, Kun Xu, Eric Xing, Pengtao Xie
Training complex dialog generation models on small datasets bears a high risk of overfitting.
no code implementations • SIGDIAL (ACL) 2022 • Qingyang Wu, Song Feng, Derek Chen, Sachindra Joshi, Luis A. Lastras, Zhou Yu
Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation.
no code implementations • 8 Jul 2022 • Xiaojiang Peng, Xiaomao Fan, Qingyang Wu, Jieyan Zhao, Pan Gao
Moreover, we present a new Coarse-to-fine Deep Smoky vehicle detection (CoDeS) framework for efficient smoky vehicle detection.
1 code implementation • 15 Sep 2022 • Qingyang Wu, Zhou Yu
Transformer encoder-decoder models have achieved great performance in dialogue generation tasks; however, their inability to process long dialogue history often leads to truncation of the context. To address this problem, we propose a novel memory-augmented transformer that is compatible with existing pre-trained encoder-decoder models and enables efficient preservation of the dialogue history information.
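The core idea — preserving old dialogue history in a compact memory instead of truncating it — can be sketched at a high level. This toy class, its budgets, and its crude first-few-words "compression" are all hypothetical illustrations of the general pattern, not the paper's actual architecture.

```python
from collections import deque

class DialogueMemory:
    """Toy sketch: keep the most recent turns verbatim and fold older turns
    into a bounded 'memory' rather than discarding them (illustrative only,
    not the memory-augmented transformer itself)."""

    def __init__(self, recent_budget=3, memory_budget=10):
        self.recent = deque()                       # verbatim recent turns
        self.memory = deque(maxlen=memory_budget)   # compressed older turns
        self.recent_budget = recent_budget

    def add_turn(self, turn: str):
        self.recent.append(turn)
        while len(self.recent) > self.recent_budget:
            old = self.recent.popleft()
            # Crude stand-in for learned compression: keep the first few words.
            self.memory.append(" ".join(old.split()[:4]))

    def context(self) -> str:
        """Context fed to the model: compressed memory plus recent turns."""
        return " | ".join(list(self.memory) + list(self.recent))

mem = DialogueMemory(recent_budget=2)
for turn in ["Hi, I need a hotel in the centre of town",
             "Sure, what price range are you looking for?",
             "Something cheap with free wifi please"]:
    mem.add_turn(turn)
print(mem.context())
# The oldest turn survives in compressed form instead of being truncated away.
```

In the actual model the compression would be a learned memory module over hidden states; the point here is only that old context is retained in bounded space rather than dropped.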
no code implementations • 12 Nov 2022 • Shuyi Mao, Xinpeng Li, Qingyang Wu, Xiaojiang Peng
Studies have proven that domain bias and label bias exist in different Facial Expression Recognition (FER) datasets, making it hard to improve the performance of a specific dataset by adding other datasets.
1 code implementation • 30 Nov 2022 • Xiao Yu, Qingyang Wu, Kun Qian, Zhou Yu
In task-oriented dialogs (TOD), reinforcement learning (RL) algorithms train a model to directly optimize responses for task-related metrics.
1 code implementation • CVPR 2023 • Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, Yong Jae Lee
Large-scale text-to-image diffusion models have made amazing advances.
Ranked #4 on Conditional Text-to-Image Synthesis on COCO-MIG
1 code implementation • 8 Mar 2023 • Deema Alnuhait, Qingyang Wu, Zhou Yu
While current dialogue systems like ChatGPT have made significant advancements in text-based interactions, they often overlook the potential of other modalities in enhancing the overall user experience.
1 code implementation • ICCV 2023 • Ran Gong, Jiangyong Huang, Yizhou Zhao, Haoran Geng, Xiaofeng Gao, Qingyang Wu, Wensi Ai, Ziheng Zhou, Demetri Terzopoulos, Song-Chun Zhu, Baoxiong Jia, Siyuan Huang
To tackle these challenges, we present ARNOLD, a benchmark that evaluates language-grounded task learning with continuous states in realistic 3D scenes.
9 code implementations • NeurIPS 2023 • Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field.
Ranked #4 on Visual Question Answering on BenchLMM
no code implementations • 23 May 2023 • Qingyang Wu, Deema Alnuhait, Derek Chen, Zhou Yu
We demonstrate our paradigm in practice through MultiWOZ-Remake, including an interactive textual interface built for the MultiWOZ database and a correspondingly re-processed dataset.
no code implementations • 1 Aug 2023 • Qingyang Wu, James Gung, Raphael Shu, Yi Zhang
Dialogue act annotations are important to improve response generation quality in task-oriented dialogue systems.
no code implementations • 17 Dec 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Qingyang Wu, Zhongfen Deng, Jiangshu Du, Shuaiqi Liu, Yunlong Xu, Philip S. Yu
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language, transforming them into structured outputs that combine elements of both natural language and intent/slot tags.
no code implementations • 22 Apr 2024 • Qingyang Wu, Ying Xu, Tingsong Xiao, Yunze Xiao, Yitong Li, Tianyang Wang, Yichi Zhang, Shanghai Zhong, Yuwei Zhang, Wei Lu, Yifan Yang
This study conducts a comprehensive review and analysis of the existing literature on the attitudes of LLMs towards the 17 SDGs, emphasizing the comparison between their attitudes and support for each goal and those of humans.