Search Results for author: Yunyi Yang

Found 8 papers, 7 papers with code

Constituency Lattice Encoding for Aspect Term Extraction

1 code implementation COLING 2020 Yunyi Yang, Kun Li, Xiaojun Quan, Weizhou Shen, Qinliang Su

One of the remaining challenges for aspect term extraction in sentiment analysis resides in the extraction of phrase-level aspect terms, whose boundaries are non-trivial to determine.

Aspect Term Extraction and Sentiment Classification, Sentence, +1
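To make the boundary issue concrete, here is a minimal sketch (not code from the paper) of decoding phrase-level aspect terms from BIO tags; the tokens and tags are hypothetical examples.

```python
# Minimal sketch (not from the paper): decode phrase-level aspect terms from
# BIO tags, illustrating why determining term boundaries is the hard part.

def decode_aspect_terms(tokens, tags):
    """Collect contiguous B/I spans into aspect terms."""
    terms, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                  # beginning of an aspect term
            if current:
                terms.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:    # continuation of the current term
            current.append(token)
        else:                           # "O" or a stray "I" closes any open span
            if current:
                terms.append(" ".join(current))
            current = []
    if current:
        terms.append(" ".join(current))
    return terms

tokens = ["The", "battery", "life", "of", "this", "laptop", "is", "great"]
tags   = ["O",   "B",       "I",    "O",  "O",    "O",      "O",  "O"]
print(decode_aspect_terms(tokens, tags))  # ['battery life']
```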

UBAR: Towards Fully End-to-End Task-Oriented Dialog Systems with GPT-2

1 code implementation 7 Dec 2020 Yunyi Yang, Yunhao Li, Xiaojun Quan

This paper presents our task-oriented dialog system UBAR, which models task-oriented dialogs on the dialog session level.

Language Modelling, Response Generation
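A minimal sketch of the session-level idea, not the official UBAR implementation: every turn's user utterance, belief state, DB result, system act, and response are concatenated into one sequence so the model conditions on the whole session rather than a single turn. The special tokens and the toy dialog below are assumptions for illustration.

```python
# Sketch (assumed field names and markers): build a session-level input
# sequence spanning all turns of a dialog.

def build_session_sequence(turns):
    pieces = []
    for t in turns:
        pieces += [
            "<sos_u>", t["user"], "<eos_u>",
            "<sos_b>", t["belief"], "<eos_b>",
            "<sos_d>", t["db"], "<eos_d>",
            "<sos_a>", t["act"], "<eos_a>",
            "<sos_r>", t["response"], "<eos_r>",
        ]
    return " ".join(pieces)

turns = [
    {"user": "i need a cheap hotel", "belief": "[hotel] pricerange cheap",
     "db": "[db_3]", "act": "[hotel] [request] area",
     "response": "which area would you like ?"},
    {"user": "in the centre please", "belief": "[hotel] pricerange cheap area centre",
     "db": "[db_1]", "act": "[hotel] [inform] name",
     "response": "the [value_name] is available ."},
]
print(build_session_sequence(turns))
```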

Directed Acyclic Graph Network for Conversational Emotion Recognition

1 code implementation ACL 2021 Weizhou Shen, Siyue Wu, Yunyi Yang, Xiaojun Quan

In this paper, we put forward a novel idea of encoding the utterances with a directed acyclic graph (DAG) to better model the intrinsic structure within a conversation, and design a directed acyclic neural network, namely DAG-ERC, to implement this idea.

Emotion Recognition in Conversation
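A simplified sketch of encoding a conversation as a directed acyclic graph follows; it uses a plain window rule rather than the speaker-aware construction of DAG-ERC, so it only illustrates how utterances connect and why such a graph is acyclic.

```python
# Simplified sketch (not the DAG-ERC construction rule): each utterance
# receives edges from at most `window` preceding utterances. Edges only
# point forward in time, so the graph is acyclic by construction.

def build_conversation_dag(utterances, window=2):
    edges = []  # (source_index, target_index), source precedes target
    for j in range(len(utterances)):
        for i in range(max(0, j - window), j):
            edges.append((i, j))
    return edges

utterances = [
    ("A", "Hey, how did the interview go?"),
    ("B", "Honestly, not great."),
    ("A", "Oh no, what happened?"),
    ("B", "I blanked on the coding question."),
]
for i, j in build_conversation_dag(utterances):
    print(f"u{i} -> u{j}")
```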

Amendable Generation for Dialogue State Tracking

1 code implementation EMNLP (NLP4ConvAI) 2021 Xin Tian, Liankai Huang, Yingzhan Lin, Siqi Bao, Huang He, Yunyi Yang, Hua Wu, Fan Wang, Shuqi Sun

In this paper, we propose a novel Amendable Generation for Dialogue State Tracking (AG-DST), which contains a two-pass generation process: (1) generating a primitive dialogue state based on the dialogue of the current turn and the previous dialogue state, and (2) amending the primitive dialogue state from the first pass.

Dialogue State Tracking, Multi-domain Dialogue State Tracking, +1
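The generate-then-amend loop can be sketched as below; `generate` is a hypothetical stand-in for a seq2seq model, not the AG-DST implementation, and the state strings are toy examples.

```python
# Minimal sketch of a two-pass "generate then amend" loop in the spirit of
# AG-DST; `generate` is a placeholder, not the paper's model.

def generate(prompt):
    # A real system would call a fine-tuned seq2seq model here.
    # We return a canned state so the sketch stays runnable.
    return "[hotel] pricerange cheap area centre"

def track_state(dialogue_turn, previous_state):
    # Pass 1: primitive state from the current turn and the previous state.
    primitive = generate(f"{previous_state} <sep> {dialogue_turn}")
    # Pass 2: amend the primitive state, again conditioned on the context.
    amended = generate(f"{previous_state} <sep> {dialogue_turn} <sep> {primitive}")
    return amended

print(track_state("i want something in the centre", "[hotel] pricerange cheap"))
```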

UBARv2: Towards Mitigating Exposure Bias in Task-Oriented Dialogs

1 code implementation 15 Sep 2022 Yunyi Yang, Hong Ding, Qingyi Liu, Xiaojun Quan

This paper studies the exposure bias problem in task-oriented dialog (TOD) systems, where the model's generated content over multiple turns drives the dialog context away from the ground-truth distribution seen at training time, introducing error propagation and damaging the robustness of the TOD system.
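The train/inference mismatch can be illustrated with a toy rollout: the context fed to the model at each turn is built either from ground-truth responses (as during training) or from the model's own earlier generations (as at inference). `generate_response` is a hypothetical placeholder, not the paper's model.

```python
# Illustrative sketch (not from the paper): contrast a training-style context
# (ground-truth responses) with an inference-style context (the model's own
# generations); the gap between the two is the exposure bias being targeted.

def generate_response(context, user_utt):
    # Placeholder model: a real system would condition on the full context.
    return f"(model reply to: {user_utt})"

def rollout(user_utts, gold_responses, use_generated_context):
    context, generated = [], []
    for user_utt, gold in zip(user_utts, gold_responses):
        context.append(user_utt)
        reply = generate_response(" ".join(context), user_utt)
        generated.append(reply)
        # The choice below is exactly where the train/inference mismatch arises.
        context.append(reply if use_generated_context else gold)
    return generated

users = ["book a table for two", "make it 7pm"]
golds = ["sure , which time ?", "booked for 7pm ."]
print(rollout(users, golds, use_generated_context=False))  # training-style context
print(rollout(users, golds, use_generated_context=True))   # inference-style context
```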
