Search Results for author: Qingyang Wu

Found 23 papers, 10 papers with code

Surveying Attitudinal Alignment Between Large Language Models Vs. Humans Towards 17 Sustainable Development Goals

no code implementations · 22 Apr 2024 · Qingyang Wu, Ying Xu, Tingsong Xiao, Yunze Xiao, Yitong Li, Tianyang Wang, Yichi Zhang, Shanghai Zhong, Yuwei Zhang, Wei Lu, Yifan Yang

This study conducts a comprehensive review and analysis of the existing literature on the attitudes of LLMs towards the 17 SDGs, emphasizing the comparison between their attitudes and support for each goal and those of humans.

Decision Making

kNN-ICL: Compositional Task-Oriented Parsing Generalization with Nearest Neighbor In-Context Learning

no code implementations · 17 Dec 2023 · Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Qingyang Wu, Zhongfen Deng, Jiangshu Du, Shuaiqi Liu, Yunlong Xu, Philip S. Yu

Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language, transforming them into structured outputs that combine elements of both natural language and intent/slot tags.

In-Context Learning Prompt Engineering +1

Using Textual Interface to Align External Knowledge for End-to-End Task-Oriented Dialogue Systems

no code implementations · 23 May 2023 · Qingyang Wu, Deema Alnuhait, Derek Chen, Zhou Yu

We demonstrate our paradigm in practice through MultiWOZ-Remake, including an interactive textual interface built for the MultiWOZ database and a correspondingly re-processed dataset.

Task-Oriented Dialogue Systems

Visual Instruction Tuning

9 code implementations · NeurIPS 2023 · Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee

Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field.

Video Question Answering visual instruction following +2

FaceChat: An Emotion-Aware Face-to-face Dialogue Framework

1 code implementation · 8 Mar 2023 · Deema Alnuhait, Qingyang Wu, Zhou Yu

While current dialogue systems like ChatGPT have made significant advancements in text-based interactions, they often overlook the potential of other modalities in enhancing the overall user experience.

KRLS: Improving End-to-End Response Generation in Task Oriented Dialog with Reinforced Keywords Learning

1 code implementation · 30 Nov 2022 · Xiao Yu, Qingyang Wu, Kun Qian, Zhou Yu

In task-oriented dialogs (TOD), reinforcement learning (RL) algorithms train a model to directly optimize response for task-related metrics.

Language Modelling reinforcement-learning +2

AU-Aware Vision Transformers for Biased Facial Expression Recognition

no code implementations · 12 Nov 2022 · Shuyi Mao, Xinpeng Li, Qingyang Wu, Xiaojiang Peng

Studies have proven that domain bias and label bias exist in different Facial Expression Recognition (FER) datasets, making it hard to improve the performance of a specific dataset by adding other datasets.

Domain Adaptation Facial Expression Recognition +1

Stateful Memory-Augmented Transformers for Efficient Dialogue Modeling

1 code implementation · 15 Sep 2022 · Qingyang Wu, Zhou Yu

Transformer encoder-decoder models have achieved great performance in dialogue generation tasks; however, their inability to process long dialogue history often leads to truncation of the context. To address this problem, we propose a novel memory-augmented transformer that is compatible with existing pre-trained encoder-decoder models and enables efficient preservation of the dialogue history information.

Decoder Dialogue Generation +1

Video-based Smoky Vehicle Detection with A Coarse-to-Fine Framework

no code implementations · 8 Jul 2022 · Xiaojiang Peng, Xiaomao Fan, Qingyang Wu, Jieyan Zhao, Pan Gao

Moreover, we present a new Coarse-to-fine Deep Smoky vehicle detection (CoDeS) framework for efficient smoky vehicle detection.

DG2: Data Augmentation Through Document Grounded Dialogue Generation

no code implementations · SIGDIAL (ACL) 2022 · Qingyang Wu, Song Feng, Derek Chen, Sachindra Joshi, Luis A. Lastras, Zhou Yu

Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation.

Data Augmentation Dialogue Generation

Perception Score: A Learned Metric for Open-ended Text Generation Evaluation

no code implementations · 7 Aug 2020 · Jing Gu, Qingyang Wu, Zhou Yu

Automatic evaluation for open-ended natural language generation tasks remains a challenge.

Text Generation

A Tailored Pre-Training Model for Task-Oriented Dialog Generation

1 code implementation · 24 Apr 2020 · Jing Gu, Qingyang Wu, Chongruo Wu, Weiyan Shi, Zhou Yu

The recent success of large pre-trained language models such as BERT and GPT-2 has suggested the effectiveness of incorporating language priors in downstream dialog generation tasks.

Knowledge Distillation Language Modelling +1

TextGAIL: Generative Adversarial Imitation Learning for Text Generation

no code implementations · 7 Apr 2020 · Qingyang Wu, Lei Li, Zhou Yu

Generative Adversarial Networks (GANs) for text generation have recently received many criticisms, as they perform worse than their MLE counterparts.

Conditional Text Generation Imitation Learning

Importance-Aware Learning for Neural Headline Editing

no code implementations · 25 Nov 2019 · Qingyang Wu, Lei Li, Hao Zhou, Ying Zeng, Zhou Yu

We propose to automate this headline editing process through neural network models to provide more immediate writing support for these social media news writers.

Decoder Headline Generation

Alternating Recurrent Dialog Model with Large-scale Pre-trained Language Models

1 code implementation · EACL 2021 · Qingyang Wu, Yichi Zhang, Yu Li, Zhou Yu

Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks.

Language Modelling Response Generation

Quantifying Intrinsic Uncertainty in Classification via Deep Dirichlet Mixture Networks

no code implementations · 11 Jun 2019 · Qingyang Wu, He Li, Lexin Li, Zhou Yu

With the widespread success of deep neural networks in science and technology, it is becoming increasingly important to quantify the uncertainty of the predictions produced by deep learning.

Classification General Classification +1
