Search Results for author: Zehao Lin

Found 7 papers, 1 paper with code

Similar Scenes arouse Similar Emotions: Parallel Data Augmentation for Stylized Image Captioning

no code implementations26 Aug 2021 Guodun Li, Yuchen Zhai, Zehao Lin, Yin Zhang

Second, we construct a pluggable multi-modal scene retriever that retrieves scenes, each represented as a pair of an image and its stylized caption, that are similar to the query image or caption in the large-scale factual data.
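The retriever described above can be illustrated with a minimal nearest-neighbor sketch. This is an assumption-laden toy, not the paper's implementation: the encoders are abstracted away, and scenes are represented as precomputed embedding vectors scored by cosine similarity.

```python
import numpy as np

def retrieve_similar_scenes(query_vec, scene_vecs, k=3):
    """Return indices of the k scenes most similar to the query.

    Hypothetical sketch: each scene pairs an image with its stylized
    caption, but here both are collapsed into one precomputed vector.
    """
    q = query_vec / np.linalg.norm(query_vec)
    s = scene_vecs / np.linalg.norm(scene_vecs, axis=1, keepdims=True)
    sims = s @ q                  # cosine similarity to the query
    return np.argsort(-sims)[:k]  # indices of the top-k scenes

# Toy scene memory: 4 scenes in a 3-d embedding space.
scenes = np.array([[1.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
top = retrieve_similar_scenes(query, scenes, k=2)
print(list(top))  # -> [0, 1]
```

In the paper the retrieved image-caption pairs then serve as parallel data for the stylized captioner; here the retrieval step alone is shown.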

Data Augmentation, Image Captioning

Dialogue State Tracking with Multi-Level Fusion of Predicted Dialogue States and Conversations

1 code implementation SIGDIAL (ACL) 2021 Jingyao Zhou, Haipang Wu, Zehao Lin, Guodun Li, Yin Zhang

Then the representation of each dialogue turn is aggregated by a hierarchical structure to form the passage information, which is used in the current DST turn.
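The hierarchical aggregation above can be sketched with two pooling levels: tokens are pooled into a turn vector, and turn vectors are pooled into a passage vector. This is a schematic assumption (mean pooling standing in for the model's learned aggregation), not the paper's architecture.

```python
import numpy as np

def turn_representation(token_vecs):
    # First level: pool the token embeddings of one turn into a turn vector.
    return token_vecs.mean(axis=0)

def passage_representation(turns):
    # Second level: pool turn vectors into the passage-level information
    # that the current DST turn conditions on.
    turn_vecs = np.stack([turn_representation(t) for t in turns])
    return turn_vecs.mean(axis=0)

dialogue = [
    np.array([[1.0, 0.0], [3.0, 2.0]]),  # turn 1: two token vectors
    np.array([[0.0, 4.0]]),              # turn 2: one token vector
]
passage = passage_representation(dialogue)
print(passage)  # -> [1.  2.5]
```

A learned model would replace the mean pools with attention or recurrent encoders, but the two-level shape of the computation is the same.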

Dialogue State Tracking

Predict-then-Decide: A Predictive Approach for Wait or Answer Task in Dialogue Systems

no code implementations27 May 2020 Zehao Lin, Shaobo Cui, Guodun Li, Xiaoming Kang, Feng Ji, FengLin Li, Zhongzhou Zhao, Haiqing Chen, Yin Zhang

More specifically, we take advantage of a decision model to help the dialogue system decide whether to wait or answer.
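The decision step can be reduced to a binary rule on the decision model's prediction. The threshold and probability interface below are illustrative assumptions, not the paper's actual model, which predicts future utterances before deciding.

```python
def decide_wait_or_answer(p_user_continues, threshold=0.5):
    """Toy decision rule: if the model believes the user is likely to
    keep typing, wait for more input; otherwise answer now.

    `p_user_continues` is a hypothetical probability that the user's
    message is not yet complete.
    """
    return "wait" if p_user_continues >= threshold else "answer"

print(decide_wait_or_answer(0.8))  # -> wait
print(decide_wait_or_answer(0.2))  # -> answer
```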

MTSS: Learn from Multiple Domain Teachers and Become a Multi-domain Dialogue Expert

no code implementations21 May 2020 Shuke Peng, Feng Ji, Zehao Lin, Shaobo Cui, Haiqing Chen, Yin Zhang

Building a high-quality multi-domain dialogue system is challenging because the dialogue state spaces of the individual domains are complicated and entangled, which severely limits the quality of the dialogue policy and, in turn, the generated responses.

Task-Oriented Conversation Generation Using Heterogeneous Memory Networks

no code implementations IJCNLP 2019 Zehao Lin, Xinjing Huang, Feng Ji, Haiqing Chen, Ying Zhang

How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans.
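One standard way to incorporate external knowledge, and the core read operation of a memory network, is an attention read over stored key-value entries. The sketch below is a generic single-hop memory read under assumed toy embeddings; it does not reproduce the heterogeneous memories of the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def read_memory(query, keys, values):
    """One attention read over an external memory: score each memory key
    against the dialogue query, then return the weighted sum of values."""
    weights = softmax(keys @ query)
    return weights @ values

# Toy knowledge memory: 3 entries with 2-d keys and values.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
query = np.array([2.0, 0.0])

out = read_memory(query, keys, values)
print(out.shape)  # (2,)
```

The read vector `out` leans toward the values whose keys align with the query, which is how such a model biases generation toward relevant knowledge entries.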
