no code implementations • COLING 2022 • Qiong Nan, Danding Wang, Yongchun Zhu, Qiang Sheng, Yuhui Shi, Juan Cao, Jintao Li
To address this issue, we propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND), which can improve the performance on specific target domains.
no code implementations • 30 Jun 2022 • Shuokai Li, Yongchun Zhu, Ruobing Xie, Zhenwei Tang, Zhao Zhang, Fuzhen Zhuang, Qing He, Hui Xiong
In this paper, we propose two key points for CRS to improve the user experience: (1) Speaking like a human: humans can speak with different styles according to the current dialogue context.
1 code implementation • 26 Jun 2022 • Yongchun Zhu, Qiang Sheng, Juan Cao, Qiong Nan, Kai Shu, Minghui Wu, Jindong Wang, Fuzhen Zhuang
In this paper, we propose a Memory-guided Multi-view Multi-domain Fake News Detection Framework (M$^3$FEND) to address these two challenges.
no code implementations • 19 May 2022 • Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xu Zhang, Leyu Lin, Qing He
Specifically, we build the personalized soft prefix prompt via a prompt generator based on user profiles and enable sufficient training of prompts via prompt-oriented contrastive learning with both prompt- and behavior-based augmentations.
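A minimal sketch of the two ingredients this excerpt names, in PyTorch: a generator that maps a user-profile embedding to a soft prefix prompt, and an InfoNCE-style contrastive loss over two augmented views. All module names and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixPromptGenerator(nn.Module):
    """Maps a user-profile embedding to a soft prefix prompt: a short
    sequence of pseudo-token embeddings prepended to the behavior
    sequence. Sizes and layer choices are illustrative."""
    def __init__(self, profile_dim, hidden_dim, prompt_len, emb_dim):
        super().__init__()
        self.prompt_len, self.emb_dim = prompt_len, emb_dim
        self.net = nn.Sequential(
            nn.Linear(profile_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, prompt_len * emb_dim),
        )

    def forward(self, profile):                  # (B, profile_dim)
        prompt = self.net(profile)               # (B, prompt_len * emb_dim)
        return prompt.view(-1, self.prompt_len, self.emb_dim)

def prompt_contrastive_loss(anchor, positive, temperature=0.1):
    """InfoNCE-style loss between two views (e.g., prompt-based and
    behavior-based augmentations) of the same user's pooled
    representation, using in-batch negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature             # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```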
1 code implementation • 10 May 2022 • Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xiang Ao, Xu Zhang, Leyu Lin, Qing He
In this work, we define the selective fairness task, where users can flexibly choose which sensitive attributes the recommendation model should be bias-free with respect to.
1 code implementation • 2 May 2022 • Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, Xiangliang Zhang
Most real-world knowledge graphs (KGs) are far from complete and comprehensive.
1 code implementation • 20 Apr 2022 • Shuokai Li, Ruobing Xie, Yongchun Zhu, Xiang Ao, Fuzhen Zhuang, Qing He
In this work, we highlight that the user's historical dialogue sessions and look-alike users are essential sources of user preferences besides the current dialogue session in CRS.
Ranked #3 on Recommendation Systems on ReDial
1 code implementation • 20 Apr 2022 • Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, Fuzhen Zhuang
In this paper, we propose an entity debiasing framework (ENDEF) which generalizes fake news detection models to future data by mitigating entity bias from a cause-effect perspective.
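The cause-effect debiasing idea can be sketched as a main content branch trained jointly with an entity-only branch, where only the content branch is used at inference so the direct entity-to-label shortcut is removed. This is a hedged reconstruction; the encoders, heads, and loss weighting below are assumptions.

```python
import torch
import torch.nn as nn

class EntityDebiasedDetector(nn.Module):
    """Sketch of entity debiasing: a content model and an entity-only
    branch are trained together; at inference the entity branch is
    dropped. Module names and dimensions are illustrative."""
    def __init__(self, content_encoder, entity_encoder, hidden_dim):
        super().__init__()
        self.content_encoder = content_encoder   # any text encoder -> (B, hidden_dim)
        self.entity_encoder = entity_encoder     # encodes entity tokens only
        self.content_head = nn.Linear(hidden_dim, 1)
        self.entity_head = nn.Linear(hidden_dim, 1)

    def forward(self, content_feats, entity_feats):
        c = self.content_head(self.content_encoder(content_feats))
        e = self.entity_head(self.entity_encoder(entity_feats))
        return c, e

def debias_loss(c_logit, e_logit, label, alpha=0.2):
    """Fused prediction supervises the main path; the auxiliary term
    pushes the entity branch to absorb the entity-only signal.
    The weighting scheme is an assumption."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(c_logit + e_logit, label) + alpha * bce(e_logit, label)

# Inference uses only c_logit, so the entity shortcut is mitigated.
```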
1 code implementation • ACL 2022 • Qiang Sheng, Juan Cao, Xueyao Zhang, Rundong Li, Danding Wang, Yongchun Zhu
To differentiate fake news from real ones, existing methods observe the language patterns of the news post and "zoom in" to verify its content with knowledge sources or check its readers' replies.
1 code implementation • 20 Mar 2022 • Yiqing Wu, Ruobing Xie, Yongchun Zhu, Xiang Ao, Xin Chen, Xu Zhang, Fuzhen Zhuang, Leyu Lin, Qing He
We argue that MBR models should: (1) model the coarse-grained commonalities between different behaviors of a user, (2) consider both individual sequence view and global graph view in multi-behavior modeling, and (3) capture the fine-grained differences between multiple behaviors of a user.
no code implementations • 4 Jan 2022 • Yongchun Zhu, Dongbo Xi, Bowen Song, Fuzhen Zhuang, Shuai Chen, Xi Gu, Qing He
Thus, in this paper, we further propose a transfer framework to tackle the cross-domain fraud detection problem, which aims to transfer knowledge from existing domains (source domains) with sufficient, mature data to improve the performance in the new domain (target domain).
1 code implementation • 4 Jan 2022 • Qiong Nan, Juan Cao, Yongchun Zhu, Yanyan Wang, Jintao Li
In this paper, we first construct a benchmark fake news dataset for MFND with domain labels annotated, namely Weibo21, which consists of 4,488 fake news pieces and 4,640 real news pieces from 9 different domains.
1 code implementation • 4 Jan 2022 • Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Jingwu Chen, Zhiping Shi, Wenjuan Wu, Qing He
Based on this, we present Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment which can capture the information from different aspects.
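A rough sketch of multi-representation alignment: parallel projection heads produce several views of the backbone feature, and a domain-discrepancy loss is applied per view. The two-head structure and the linear-kernel MMD are simplifications, not MRAN's exact substructures.

```python
import torch
import torch.nn as nn

def mmd(x, y):
    """Simple linear-kernel MMD estimate between two feature batches."""
    return ((x.mean(0) - y.mean(0)) ** 2).sum()

class MultiRepresentationAdapter(nn.Module):
    """Sketch of aligning multiple representations: each projection
    head gives a different 'view' of the backbone feature, and the
    discrepancy between source and target is minimized per view."""
    def __init__(self, backbone, feat_dim, view_dim, num_views=2, num_classes=31):
        super().__init__()
        self.backbone = backbone
        self.views = nn.ModuleList(
            [nn.Linear(feat_dim, view_dim) for _ in range(num_views)]
        )
        self.classifier = nn.Linear(num_views * view_dim, num_classes)

    def forward(self, x_src, x_tgt):
        f_s, f_t = self.backbone(x_src), self.backbone(x_tgt)
        vs = [v(f_s) for v in self.views]        # source views
        vt = [v(f_t) for v in self.views]        # target views
        align_loss = sum(mmd(a, b) for a, b in zip(vs, vt))
        logits = self.classifier(torch.cat(vs, dim=1))
        return logits, align_loss
```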
1 code implementation • 4 Jan 2022 • Yongchun Zhu, Fuzhen Zhuang, Deqing Wang
However, in the practical scenario, labeled data can be typically collected from multiple diverse sources, and they might be different not only from the target domain but also from each other.
Image Classification • Multi-Source Unsupervised Domain Adaptation • +1
no code implementations • 31 Dec 2021 • Dongbo Xi, Fuzhen Zhuang, Bowen Song, Yongchun Zhu, Shuai Chen, Dan Hong, Tao Chen, Xi Gu, Qing He
Many prediction tasks in real-world applications need to model multi-order feature interactions in users' event sequences for better detection performance.
1 code implementation • 21 Oct 2021 • Yongchun Zhu, Zhenwei Tang, Yudan Liu, Fuzhen Zhuang, Ruobing Xie, Xu Zhang, Leyu Lin, Qing He
Specifically, a meta network fed with users' characteristic embeddings is learned to generate personalized bridge functions to achieve personalized transfer of preferences for each user.
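A compact sketch of the personalized bridge idea: a meta network consumes a user's characteristic embedding and emits the weights of a per-user linear mapping from the source embedding space to the target space. The single linear bridge and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class MetaBridge(nn.Module):
    """Sketch of a meta network that turns a user's characteristic
    embedding (e.g., pooled source-domain history) into the weights
    of a personalized linear bridge. A single linear bridge is a
    simplifying assumption."""
    def __init__(self, char_dim, emb_dim):
        super().__init__()
        self.emb_dim = emb_dim
        self.meta_net = nn.Sequential(
            nn.Linear(char_dim, char_dim),
            nn.ReLU(),
            nn.Linear(char_dim, emb_dim * emb_dim),
        )

    def forward(self, char_emb, src_user_emb):
        # One bridge matrix per user, generated on the fly.
        W = self.meta_net(char_emb).view(-1, self.emb_dim, self.emb_dim)
        # Map the source-domain user embedding into the target space.
        return torch.bmm(src_user_emb.unsqueeze(1), W).squeeze(1)
```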
1 code implementation • 17 Jun 2021 • Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu Chen, Jiang Bian, Hui Xiong, Qing He
The adaptation can be achieved easily with most feed-forward network models by extending them with LMMD loss, which can be trained efficiently via back-propagation.
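A simplified LMMD-style loss, assuming a linear kernel for brevity (the paper uses multi-kernel MMD): class-conditional means are compared across domains, weighting source samples by their labels and target samples by predicted class probabilities.

```python
import torch

def lmmd(f_src, f_tgt, y_src, p_tgt, num_classes):
    """Simplified local MMD between per-class weighted means.
    f_src: (B_s, D) source features; y_src: (B_s,) int labels
    f_tgt: (B_t, D) target features; p_tgt: (B_t, C) predicted probs."""
    y_onehot = torch.nn.functional.one_hot(y_src, num_classes).float()
    w_s = y_onehot / y_onehot.sum(0).clamp(min=1e-6)   # (B_s, C) per-class weights
    w_t = p_tgt / p_tgt.sum(0).clamp(min=1e-6)         # (B_t, C) soft weights
    loss = 0.0
    for c in range(num_classes):
        mu_s = (w_s[:, c:c + 1] * f_src).sum(0)        # weighted source class mean
        mu_t = (w_t[:, c:c + 1] * f_tgt).sum(0)        # weighted target class mean
        loss = loss + ((mu_s - mu_t) ** 2).sum()
    return loss
```

Because this reduces to a differentiable sum of squared mean differences, it can simply be added to the classification loss and trained via back-propagation, as the excerpt notes.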
3 code implementations • 31 May 2021 • Yongchun Zhu, Yudan Liu, Ruobing Xie, Fuzhen Zhuang, Xiaobo Hao, Kaikai Ge, Xu Zhang, Leyu Lin, Juan Cao
Besides, MetaHeac has been successfully deployed in WeChat for promoting both content and advertisements, leading to significant improvements in marketing quality.
3 code implementations • 18 May 2021 • Dongbo Xi, Zhen Chen, Peng Yan, Yinger Zhang, Yongchun Zhu, Fuzhen Zhuang, Yu Chen
While considerable multi-task efforts have been made in this direction, a long-standing challenge is how to explicitly model the long-path sequential dependence among audiences' multi-step conversions to improve the end-to-end conversion.
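One way to picture the sequential dependence this excerpt describes: task towers are chained so each funnel step conditions on the previous step's hidden state. The simple feed-through transfer below is an illustrative stand-in, not the paper's exact transfer module.

```python
import torch
import torch.nn as nn

class FunnelTowers(nn.Module):
    """Sketch of modeling a multi-step conversion funnel
    (e.g., impression -> click -> conversion): each task tower
    receives the previous tower's hidden state, so later steps
    explicitly condition on earlier ones."""
    def __init__(self, in_dim, hid, num_steps=3):
        super().__init__()
        self.hid = hid
        self.towers = nn.ModuleList(
            [nn.Linear(in_dim + hid, hid) for _ in range(num_steps)]
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hid, 1) for _ in range(num_steps)]
        )

    def forward(self, x):                         # x: (B, in_dim) shared features
        prev = torch.zeros(x.size(0), self.hid, device=x.device)
        logits = []
        for tower, head in zip(self.towers, self.heads):
            h = torch.relu(tower(torch.cat([x, prev], dim=1)))
            logits.append(head(h))                # one logit per funnel step
            prev = h                              # transfer info to the next step
        return logits
```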
no code implementations • 11 May 2021 • Yongchun Zhu, Kaikai Ge, Fuzhen Zhuang, Ruobing Xie, Dongbo Xi, Xu Zhang, Leyu Lin, Qing He
Leveraging the advantage of meta learning, which generalizes well to novel tasks, we propose a transfer-meta framework for CDR (TMCDR) that has a transfer stage and a meta stage.
no code implementations • 11 May 2021 • Yongchun Zhu, Ruobing Xie, Fuzhen Zhuang, Kaikai Ge, Ying Sun, Xu Zhang, Leyu Lin, Juan Cao
The cold item ID embedding has two main problems: (1) a gap exists between the cold ID embedding and the deep model.
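A hedged sketch of one way to warm up a cold ID embedding and close that gap: a feature-driven scaling of the ID embedding plus a shift derived from interacting users' embeddings. The layer shapes and mean pooling are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MetaWarmUp(nn.Module):
    """Sketch of warming up a cold item ID embedding: item features
    drive an elementwise scale (to fit the deep model), and the
    embeddings of users who interacted with the item drive a shift
    (to inject collaborative signal)."""
    def __init__(self, feat_dim, emb_dim):
        super().__init__()
        self.scale_net = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.Sigmoid())
        self.shift_net = nn.Linear(emb_dim, emb_dim)

    def forward(self, cold_id_emb, item_feat, user_embs):
        # cold_id_emb: (B, emb_dim); item_feat: (B, feat_dim)
        # user_embs: (B, K, emb_dim) embeddings of K interacting users
        scale = self.scale_net(item_feat)             # adapt to the deep model
        shift = self.shift_net(user_embs.mean(dim=1)) # borrow collaborative signal
        return scale * cold_id_emb + shift
```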
no code implementations • 27 Jan 2021 • Yongchun Zhu, Fuzhen Zhuang, Xiangliang Zhang, Zhiyuan Qi, Zhiping Shi, Juan Cao, Qing He
However, in real-world applications, the few-shot learning paradigm often suffers from data shift, i.e., samples in different tasks, or even within the same task, could be drawn from various data distributions.
no code implementations • 8 Aug 2020 • Dongbo Xi, Bowen Song, Fuzhen Zhuang, Yongchun Zhu, Shuai Chen, Tianyi Zhang, Yuan Qi, Qing He
In this paper, we propose the Dual Importance-aware Factorization Machines (DIFM), which exploits the internal field information in users' behavior sequences from dual perspectives, i.e., field value variations and field interactions, simultaneously for fraud detection.
no code implementations • 12 Jul 2020 • Dongbo Xi, Fuzhen Zhuang, Yongchun Zhu, Pengpeng Zhao, Xiangliang Zhang, Qing He
In this paper, we propose a Graph Factorization Machine (GFM) which utilizes the popular Factorization Machine to aggregate multi-order interactions from neighborhood for recommendation.
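The FM-based aggregation the excerpt describes can be sketched with the standard second-order FM identity, 0.5 * ((Σv)² − Σv²), applied over a node and its neighbors instead of simple mean pooling; concatenating the node itself into the set is an assumption.

```python
import torch

def fm_interaction(embs):
    """Second-order factorization-machine pooling over a set of
    embeddings: 0.5 * ((sum)^2 - sum of squares).
    embs: (B, N, D) -> (B, D)."""
    sum_sq = embs.sum(dim=1) ** 2       # square of sum over the set
    sq_sum = (embs ** 2).sum(dim=1)     # sum of squares over the set
    return 0.5 * (sum_sq - sq_sum)

def gfm_aggregate(node_emb, neighbor_embs):
    """Sketch of FM-style neighborhood aggregation: capture pairwise
    interactions among a node and its neighbors.
    node_emb: (B, D); neighbor_embs: (B, N, D)."""
    all_embs = torch.cat([node_emb.unsqueeze(1), neighbor_embs], dim=1)
    return fm_interaction(all_embs)
```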
2 code implementations • 20 Nov 2019 • Fuzhen Zhuang, Keyu Duan, Tongjia Guo, Yongchun Zhu, Dongbo Xi, Zhiyuan Qi, Qing He
The transfer learning toolkit wraps the code of 17 transfer learning models and provides integrated interfaces, allowing users to apply those models by calling a simple function.
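A purely hypothetical illustration of the "single function call" style the excerpt mentions; the toolkit's actual package and function names are not given in this excerpt and will differ.

```python
# Hypothetical API, for illustration only: the module name "run_model"
# and its parameters are invented, not the toolkit's real interface.
from transfer_learning_toolkit import run_model  # hypothetical import

predictions = run_model(
    model_name="DANN",   # hypothetical: select one of the 17 wrapped models
    source=(Xs, ys),     # labeled source-domain data
    target=Xt,           # unlabeled target-domain data
)
```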
3 code implementations • 7 Nov 2019 • Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, HengShu Zhu, Hui Xiong, Qing He
To demonstrate the performance of different transfer learning models, over twenty representative models are used in the experiments.