Search Results for author: Qingyang Wu

Found 15 papers, 4 papers with code

Reinforced Language Modeling for End-to-End Task Oriented Dialog

no code implementations · 30 Nov 2022 · Xiao Yu, Qingyang Wu, Kun Qian, Zhou Yu

In task-oriented dialogs such as MultiWoZ (Budzianowski et al., 2018), an informative and/or successful system response needs to include necessary key information such as the phone number of a hotel.

Language Modelling, Reinforcement Learning

AU-Aware Vision Transformers for Biased Facial Expression Recognition

no code implementations · 12 Nov 2022 · Shuyi Mao, Xinpeng Li, Qingyang Wu, Xiaojiang Peng

Studies have shown that domain bias and label bias exist across Facial Expression Recognition (FER) datasets, making it hard to improve performance on a specific dataset by adding other datasets.

Domain Adaptation, Facial Expression Recognition

Stateful Memory-Augmented Transformers for Dialogue Modeling

no code implementations · 15 Sep 2022 · Qingyang Wu, Zhou Yu

Transformer encoder-decoder models have shown impressive performance in dialogue modeling.

Language Modelling

Video-based Smoky Vehicle Detection with A Coarse-to-Fine Framework

no code implementations · 8 Jul 2022 · Xiaojiang Peng, Xiaomao Fan, Qingyang Wu, Jieyan Zhao, Pan Gao

We present a new Coarse-to-fine Deep Smoky vehicle detection (CoDeS) framework for efficient smoky vehicle detection.

DG2: Data Augmentation Through Document Grounded Dialogue Generation

no code implementations · SIGDIAL (ACL) 2022 · Qingyang Wu, Song Feng, Derek Chen, Sachindra Joshi, Luis A. Lastras, Zhou Yu

Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation.

Data Augmentation, Dialogue Generation

Perception Score, A Learned Metric for Open-ended Text Generation Evaluation

no code implementations · 7 Aug 2020 · Jing Gu, Qingyang Wu, Zhou Yu

Automatic evaluation for open-ended natural language generation tasks remains a challenge.

Text Generation

A Tailored Pre-Training Model for Task-Oriented Dialog Generation

1 code implementation · 24 Apr 2020 · Jing Gu, Qingyang Wu, Chongruo Wu, Weiyan Shi, Zhou Yu

The recent success of large pre-trained language models such as BERT and GPT-2 has suggested the effectiveness of incorporating language priors in downstream dialog generation tasks.

Knowledge Distillation, Language Modelling

TextGAIL: Generative Adversarial Imitation Learning for Text Generation

no code implementations · 7 Apr 2020 · Qingyang Wu, Lei Li, Zhou Yu

Generative Adversarial Networks (GANs) for text generation have recently received much criticism, as they perform worse than their MLE counterparts.

Conditional Text Generation, Imitation Learning

Importance-Aware Learning for Neural Headline Editing

no code implementations · 25 Nov 2019 · Qingyang Wu, Lei Li, Hao Zhou, Ying Zeng, Zhou Yu

We propose to automate the headline editing process with neural network models to provide more immediate writing support for social media news writers.

Headline Generation

Alternating Recurrent Dialog Model with Large-scale Pre-trained Language Models

1 code implementation · EACL 2021 · Qingyang Wu, Yichi Zhang, Yu Li, Zhou Yu

Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks.

Language Modelling, Response Generation

Quantifying Intrinsic Uncertainty in Classification via Deep Dirichlet Mixture Networks

no code implementations · 11 Jun 2019 · Qingyang Wu, He Li, Lexin Li, Zhou Yu

With the widespread success of deep neural networks in science and technology, it is becoming increasingly important to quantify the uncertainty of the predictions produced by deep learning.

Classification, General Classification +1
