no code implementations • Findings (EMNLP) 2021 • Lei Shen, Jinchao Zhang, Jiao Ou, Xiaofang Zhao, Jie Zhou
To address the above issues, we propose a dual-generative model, Dual-Emp, to simultaneously construct the emotion consensus and utilize external unpaired data.
no code implementations • 18 May 2024 • Chengcheng Feng, Mu He, Qiuyu Tian, Haojie Yin, Xiaofang Zhao, Hongwei Tang, Xingqiang Wei
As deep learning technology continues to advance, image generation models, especially models like Stable Diffusion, are finding increasingly widespread application in visual arts creation.
no code implementations • 14 Sep 2021 • Lei Shen, Haolan Zhan, Xin Shen, Hongshen Chen, Xiaofang Zhao, Xiaodan Zhu
The training method updates the parameters of a trained NCM on two small sets containing the newly maintained and removed samples, respectively.
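One simple reading of this two-set update scheme is gradient descent on the maintained samples combined with gradient ascent on the removed ones. The scalar linear model, squared-error loss, learning rate, and the ascent-on-removed-data rule below are all illustrative assumptions, not the paper's actual method:

```python
def loss_grad(w, x, y):
    # Gradient of the squared error (w*x - y)**2 for a scalar linear model.
    return 2 * x * (w * x - y)

w = 1.5    # "pretrained" parameter: it already fits the maintained pairs
lr = 0.01
maintained = [(1.0, 1.5), (2.0, 3.0)]  # (input, target) pairs to keep
removed = [(1.0, -4.0)]                # pair whose influence we unlearn

for _ in range(200):
    for x, y in maintained:
        w -= lr * loss_grad(w, x, y)   # keep fitting the maintained data
    for x, y in removed:
        w += lr * loss_grad(w, x, y)   # push away from the removed data
```

After the updates, `w` has moved away from the value the removed pair would induce (here, `-4`) while the maintained data keeps it from diverging.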
no code implementations • 13 Sep 2021 • Lei Shen, Haolan Zhan, Xin Shen, Yonghao Song, Xiaofang Zhao
Specifically, we obtain a group of images (PVIs) for each post based on a pre-trained word-image mapping model.
no code implementations • 26 Jun 2021 • Xu Yuan, Hongshen Chen, Yonghao Song, Xiaofang Zhao, Zhuoye Ding, Zhen He, Bo Long
In this paper, we propose a model, SSI, to improve sequential recommendation consistency with Self-Supervised Imitation.
no code implementations • 21 Oct 2020 • Hui Zhu, Xiaofang Zhao
Dropout regularization has been widely used in deep learning, but it is less effective for convolutional neural networks, since spatially correlated features allow the dropped information to still flow through the network.
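The spatial-correlation point is easy to demonstrate: zeroing individual activations in a feature map leaves correlated neighbours intact, so each channel's information survives, whereas zeroing whole channels (as in spatial/channel dropout) does not. A minimal NumPy sketch, with illustrative shapes and dropout rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def elementwise_dropout(x, p, rng):
    # Standard dropout: zero individual activations independently.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def channel_dropout(x, p, rng):
    # Spatial/channel dropout: zero entire feature maps, so correlated
    # neighbours cannot carry the dropped information.
    c = x.shape[0]
    mask = rng.random((c, 1, 1)) >= p
    return x * mask / (1.0 - p)

# Channels that are spatially constant: an extreme case of correlation.
x = np.ones((4, 8, 8))

y_elem = elementwise_dropout(x, 0.5, rng)
y_chan = channel_dropout(x, 0.5, rng)

# Element-wise dropout almost never zeroes a whole channel, so every
# channel's signal leaks through; channel dropout removes whole maps.
print("channels fully zeroed (element-wise):",
      int((y_elem.reshape(4, -1).sum(axis=1) == 0).sum()))
print("channels fully zeroed (channel-wise):",
      int((y_chan.reshape(4, -1).sum(axis=1) == 0).sum()))
```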
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Hengyi Cai, Hongshen Chen, Yonghao Song, Zhuoye Ding, Yongjun Bao, Weipeng Yan, Xiaofang Zhao
Neural dialogue response generation has gained much popularity in recent years.
no code implementations • ACL 2020 • Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, Dawei Yin
In this paper, we propose a data manipulation framework that proactively reshapes the data distribution towards reliable samples by augmenting and highlighting effective learning samples while simultaneously reducing the effect of inefficient ones.
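Such a framework can be caricatured as scoring, augmenting, and reweighting training pairs. The quality scores, threshold, weights, and the trivial copy-based augmentation below are purely illustrative stand-ins for the paper's learned manipulation:

```python
# Each entry: (source utterance, response, hypothetical quality score).
corpus = [
    ("how are you", "i am fine, thanks", 0.9),  # effective sample
    ("how are you", "fine", 0.7),               # effective sample
    ("what's up", "asdkjh", 0.1),               # inefficient sample
]

augmented, weights = [], []
for src, tgt, score in corpus:
    if score >= 0.5:
        # Highlight effective samples: keep their score as a training
        # weight and add an augmented copy (a verbatim duplicate here,
        # standing in for a real augmentation).
        augmented.append((src, tgt))
        weights.append(score)
        augmented.append((src, tgt))
        weights.append(score)
    else:
        # Keep inefficient samples but shrink their influence.
        augmented.append((src, tgt))
        weights.append(0.1 * score)
```

The reshaped set over-represents reliable pairs and down-weights the noisy one rather than discarding it outright.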
1 code implementation • 2 Mar 2020 • Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Yangxi Li, Dongsheng Duan, Dawei Yin
Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses.
1 code implementation • IJCNLP 2019 • Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Dawei Yin
For each conversation, the model generates parameters of the encoder-decoder by referring to the input context.
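Context-conditioned parameter generation can be sketched as a tiny hypernetwork-style map from a context encoding to the weights of a downstream layer. The dimensions and the single linear meta-layer below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions for illustration only.
CTX_DIM, HID_DIM, OUT_DIM = 16, 8, 8

# The "meta" network: a linear map from the context encoding to the
# flattened weights of a context-specific layer.
W_meta = rng.normal(scale=0.1, size=(CTX_DIM, HID_DIM * OUT_DIM))

def generate_layer(context):
    # Each conversation context yields its own weight matrix.
    flat = context @ W_meta
    return flat.reshape(HID_DIM, OUT_DIM)

ctx_a = rng.normal(size=CTX_DIM)  # encoding of conversation A
ctx_b = rng.normal(size=CTX_DIM)  # encoding of conversation B

W_a = generate_layer(ctx_a)
W_b = generate_layer(ctx_b)

h = rng.normal(size=HID_DIM)
out_a = h @ W_a  # the same hidden state is transformed differently
out_b = h @ W_b  # under each context-generated layer
```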
no code implementations • 2 May 2018 • Hengyi Cai, Xingguang Ji, Yonghao Song, Yan Jin, Yang Zhang, Mairgup Mansur, Xiaofang Zhao
In contrast to previous work, KNPTC is able to integrate explicit knowledge into NMT for pinyin typo correction, and learns to correct a variety of typos without the guidance of manually selected constraints or language-specific features.
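For contrast with KNPTC's knowledge-augmented NMT approach, a naive baseline for typo correction is plain string similarity over a pinyin lexicon. The toy vocabulary and typo below are invented for illustration:

```python
from difflib import get_close_matches

# Toy pinyin lexicon; a real system would use the full syllable inventory.
pinyin_vocab = ["zhang", "zhan", "chang", "shang", "zheng"]

typo = "zhagn"  # a transposition typo for "zhang"
candidates = get_close_matches(typo, pinyin_vocab, n=3, cutoff=0.6)
print(candidates)
```

Edit-distance-style matching recovers plausible candidates but cannot rank them by context, which is what a learned sequence model adds.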