no code implementations • COLING 2022 • Jinfeng Zhou, Bo Wang, Zhitong Yang, Dongming Zhao, Kun Huang, Ruifang He, Yuexian Hou
In CRS, implicit patterns in the user's interest sequence guide the smooth transition of dialog utterances toward the goal item.
no code implementations • COLING 2022 • Zhitong Yang, Bo Wang, Jinfeng Zhou, Yue Tan, Dongming Zhao, Kun Huang, Ruifang He, Yuexian Hou
We design a global reinforcement learning scheme that uses the planned paths to flexibly adjust the local response generation model toward the global target.
no code implementations • EMNLP 2021 • Jinfeng Zhou, Bo Wang, Ruifang He, Yuexian Hou
Although paths of user interest shifts in knowledge graphs (KGs) can benefit conversational recommender systems (CRS), explicit reasoning on KGs has not been well considered in CRS due to the complexity of high-order and incomplete paths.
Ranked #2 on Text Generation on ReDial
no code implementations • 24 Aug 2023 • Yachao Zhao, Bo Wang, Dongming Zhao, Kun Huang, Yan Wang, Ruifang He, Yuexian Hou
We propose that this re-judge inconsistency parallels the inconsistency between the implicit social bias humans are unaware of and the explicit social bias they are aware of.
no code implementations • 17 Jul 2023 • Yang Liu, Yuexian Hou
The experimental results demonstrate that the proposed method can bring significant improvements in BLEU scores on two datasets.
1 code implementation • 19 May 2023 • Yihong Tang, Bo Wang, Miao Fang, Dongming Zhao, Kun Huang, Ruifang He, Yuexian Hou
We design a Contrastive Latent Variable-based model (CLV) that clusters the dense persona descriptions into sparse categories, which are combined with the history query to generate personalized responses.
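The clustering step can be sketched with plain k-means over persona embeddings — a simplified stand-in for the paper's contrastive latent-variable model, not its actual architecture; the embedding values and cluster count below are toy assumptions:

```python
import numpy as np

def kmeans(X, k, iters=10):
    """Plain k-means with deterministic init: a stand-in for clustering
    dense persona vectors into sparse categories (not the CLV model)."""
    # Initialize centers from evenly spaced data points (deterministic)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each persona vector to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned vectors
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# Toy persona embeddings: two well-separated groups of 10 vectors each
rng = np.random.default_rng(3)
personas = np.vstack([rng.normal(0.0, 0.1, size=(10, 4)),
                      rng.normal(5.0, 0.1, size=(10, 4))])
labels, centers = kmeans(personas, k=2)
```

Each resulting cluster label would then index a sparse persona category that can be combined with the history query during generation.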
no code implementations • 12 May 2023 • Yang Liu, Yuexian Hou
In this paper, the log-likelihoods of stereotype bias and anti-stereotype bias samples output by MLMs are considered Gaussian distributions.
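The setup can be illustrated by fitting a Gaussian to each set of log-likelihoods and measuring the standardized gap between them — a hedged sketch only: the numbers below are invented rather than MLM output, and `bias_gap` is a hypothetical helper, not the paper's metric:

```python
import numpy as np

def gaussian_fit(samples):
    """Fit a 1-D Gaussian (mean, sample std) to log-likelihood samples."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(), samples.std(ddof=1)

def bias_gap(stereo_ll, anti_ll):
    """Standardized gap between the two fitted Gaussians.

    A positive gap means the MLM assigns higher likelihood to
    stereotype-biased samples than to anti-stereotype ones.
    """
    mu_s, sd_s = gaussian_fit(stereo_ll)
    mu_a, sd_a = gaussian_fit(anti_ll)
    pooled = np.sqrt((sd_s ** 2 + sd_a ** 2) / 2.0)
    return (mu_s - mu_a) / pooled

# Toy log-likelihoods (invented numbers, not model output)
stereo = [-2.1, -1.9, -2.0, -2.2, -1.8]
anti = [-2.6, -2.4, -2.5, -2.7, -2.3]
print(round(bias_gap(stereo, anti), 3))
```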
no code implementations • 23 Feb 2023 • Yushan Qian, Bo Wang, Ting-En Lin, Yinhe Zheng, Ying Zhu, Dongming Zhao, Yuexian Hou, Yuchuan Wu, Yongbin Li
Empathetic dialogue is a human-like behavior that requires the perception of both affective factors (e.g., emotion status) and cognitive factors (e.g., cause of the emotion).
1 code implementation • 7 Feb 2023 • Yang Liu, Yuexian Hou
Humor recognition has been extensively studied with different methods over the past years.
no code implementations • 12 Jan 2023 • Yushan Qian, Bo Wang, Shangzhao Ma, Wu Bin, Shuo Zhang, Dongming Zhao, Kun Huang, Yuexian Hou
Towards human-like dialogue systems, current emotional dialogue approaches jointly model emotion and semantics with a unified neural network.
no code implementations • 5 Nov 2022 • Jinfeng Zhou, Bo Wang, Minlie Huang, Dongming Zhao, Kun Huang, Ruifang He, Yuexian Hou
Human conversations about recommendation naturally involve shifts of interest, which can align recommendation actions with the conversation process to make accurate recommendations with rich explanations.
no code implementations • 19 Sep 2020 • Chenguang Zhang, Yuexian Hou, Dawei Song, Liangzhu Ge, Yaoshuai Yao
In this paper, from an information-theoretic perspective, we introduce a new definition of redundancy to describe the diversity of hidden units under supervised learning settings, formalizing the effect of hidden layers on generalization capacity as mutual information.
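The redundancy idea can be sketched with a histogram estimate of pairwise mutual information between hidden-unit activations — a simplified illustration; the paper's formal definition differs in detail:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X; Y) in nats for two 1-D activation vectors."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

def redundancy(hidden):
    """Average pairwise MI across hidden units: high values mean units
    carry overlapping information (low diversity)."""
    n = hidden.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([mutual_information(hidden[:, i], hidden[:, j])
                          for i, j in pairs]))

rng = np.random.default_rng(0)
a = rng.normal(size=500)
independent = np.stack([a, rng.normal(size=500)], axis=1)   # unrelated units
duplicated = np.stack([a, a + 0.01 * rng.normal(size=500)], axis=1)  # near-copies
```

Near-duplicate units score much higher redundancy than independent ones, matching the intuition that redundant hidden layers add little diversity.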
no code implementations • 25 Sep 2019 • Xindian Ma, Peng Zhang, Xiaoliu Mao, Yehua Zhang, Nan Duan, Yuexian Hou, Ming Zhou
Then, we show that the lower bound of such a separation rank can reveal the quantitative relation between the network structure (e.g., depth/width) and the modeling ability for the contextual dependency.
1 code implementation • NeurIPS 2019 • Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, Ming Zhou
In this paper, based on the ideas of tensor decomposition and parameters sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD).
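A single-block toy version of attention routed through a shared 3-way core tensor might look like the following — a sketch only, assuming one block and random factor matrices; it is not the paper's exact Block-Term Tensor Decomposition parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 16, 4   # model dim and core rank -- toy sizes, not the paper's

# Factor matrices map Q, K, V into a small shared core space.
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, r)) for _ in range(3))
core = rng.normal(scale=0.1, size=(r, r, r))   # shared 3-way core tensor

def multilinear_attention(Q, K, V):
    """One toy block of multi-linear attention: Q, K, V interact through
    a shared 3-way core instead of pairwise dot products."""
    q, k, v = Q @ Wq, K @ Wk, V @ Wv            # (n, r) each
    # Trilinear scores: s[a, b, c] = core contracted with q_a, k_b, v_c
    scores = np.einsum('ai,bj,ck,ijk->abc', q, k, v, core)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum(axis=(1, 2), keepdims=True)  # normalize per query
    # Weighted combination of value rows, mapped back to model dim
    return np.einsum('abc,ck->ak', weights, v) @ Wv.T

X = rng.normal(size=(5, d))
out = multilinear_attention(X, X, X)
```

The point of the decomposition is parameter sharing: the three small factor matrices and one core replace the full attention parameterization.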
1 code implementation • COLING 2018 • Shuqin Gu, Lipeng Zhang, Yuexian Hou, Yin Song
PBAN not only concentrates on the position information of aspect terms, but also mutually models the relation between aspect term and sentence by employing bidirectional attention mechanism.
Aspect-Based Sentiment Analysis (ABSA) Feature Engineering +2
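The bidirectional attention between an aspect term and its sentence can be sketched as follows — simplified: there are no GRU encoders here, and the position-weighting scheme is a toy assumption rather than PBAN's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_weights(n, aspect_idx):
    """Down-weight words far from the aspect term (a toy position scheme)."""
    dist = np.abs(np.arange(n) - aspect_idx)
    return 1.0 - dist / n

def bidirectional_attention(sentence, aspect, aspect_idx):
    """Aspect->sentence and sentence->aspect attention over raw embeddings,
    in the spirit of a PBAN-style model (heavily simplified)."""
    n = sentence.shape[0]
    weighted = sentence * position_weights(n, aspect_idx)[:, None]
    scores = weighted @ aspect.T                  # (n_words, n_aspect)
    s2a = softmax(scores, axis=1) @ aspect        # each word attends to aspect
    a2s = softmax(scores.T, axis=1) @ weighted    # aspect attends to sentence
    return s2a, a2s

rng = np.random.default_rng(2)
sent = rng.normal(size=(7, 8))    # 7 words, embedding dim 8
asp = rng.normal(size=(2, 8))     # 2-word aspect term
s2a, a2s = bidirectional_attention(sent, asp, aspect_idx=3)
```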
no code implementations • 5 Feb 2015 • Xiaozhao Zhao, Yuexian Hou, Dawei Song, Wenjie Li
We then revisit Boltzmann machines (BM) from a model selection perspective and theoretically show that both the fully visible BM (VBM) and the BM with hidden units can be derived from the general binary multivariate distribution using the CIF principle.
no code implementations • 16 Feb 2013 • Xiaozhao Zhao, Yuexian Hou, Qian Yu, Dawei Song, Wenjie Li
Typical dimensionality reduction methods focus on directly reducing the number of random variables while retaining maximal variations in the data.
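The classic instance of this is PCA, which keeps the top-k directions of maximal variance — a minimal SVD-based sketch:

```python
import numpy as np

def pca(X, k):
    """Project centered data onto its top-k principal directions;
    also return the per-direction sample variances."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S ** 2 / (len(X) - 1)

rng = np.random.default_rng(4)
# Toy data with one dominant direction of variation (scaled first column)
X = rng.normal(size=(200, 5)) * np.array([10.0, 1.0, 1.0, 1.0, 1.0])
Z, var = pca(X, k=2)
```

The first returned variance dominates the rest, so most of the data's variation survives the reduction to two dimensions.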