1 code implementation • EMNLP 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
no code implementations • 18 Nov 2023 • Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, Xing Xie
Behavior alignment operates in the language space, representing user preferences and item information as text to learn the recommendation model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both language and latent spaces for alignment training.
no code implementations • 15 Nov 2023 • Qi Liu, Xuyang Hou, Haoran Jin, Jin Chen, Zhe Wang, Defu Lian, Tan Qu, Jia Cheng, Jun Lei
The insights from this subset reveal the user's decision-making process related to the candidate item, improving prediction accuracy.
1 code implementation • 10 Nov 2023 • Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Defu Lian, Ning An, Longbing Cao, Zhendong Niu
FreTS mainly involves two stages: (i) Domain Conversion, which transforms time-domain signals into complex numbers in the frequency domain; (ii) Frequency Learning, which applies our redesigned MLPs to learn the real and imaginary parts of the frequency components.
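The two stages can be sketched in a few lines of numpy; this is a toy illustration with untrained weights, and the function names are hypothetical rather than taken from the FreTS code:

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot(shape):
    # Glorot-style uniform initialization for the toy MLP weights.
    limit = np.sqrt(6.0 / sum(shape))
    return rng.uniform(-limit, limit, size=shape)

def frequency_mlp_forecast(x, horizon, hidden=16):
    """Two-stage sketch on a single series.

    (i) Domain Conversion: rFFT maps the series to complex frequency
        components.
    (ii) Frequency Learning: one small (untrained) MLP consumes the real
        and imaginary parts and emits `horizon` future values.
    """
    spec = np.fft.rfft(x)                          # (i) domain conversion
    feats = np.concatenate([spec.real, spec.imag])
    w1, w2 = glorot((hidden, feats.size)), glorot((horizon, hidden))
    h = np.tanh(w1 @ feats)                        # (ii) frequency learning
    return w2 @ h

x = np.sin(np.linspace(0, 8 * np.pi, 64))
pred = frequency_mlp_forecast(x, horizon=8)
print(pred.shape)  # (8,)
```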
1 code implementation • 6 Nov 2023 • Mingjia Yin, Hao Wang, Xiang Xu, Likang Wu, Sirui Zhao, Wei Guo, Yong Liu, Ruiming Tang, Defu Lian, Enhong Chen
To this end, we propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR), that incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
no code implementations • 1 Nov 2023 • Shi Yin, Shijie Huan, Defu Lian, Shangfei Wang, Jinshui Hu, Tao Guo, Bing Yin, BaoCai Yin, Cong Liu
For temporal modeling, we propose a recurrent token mixing mechanism, an axis-landmark-positional embedding mechanism, and a confidence-enhanced multi-head attention mechanism to adaptively and robustly embed long-term landmark dynamics into their 1D representations. For structure modeling, we design intra-group and inter-group structure modeling mechanisms to encode component-level and global-level facial structure patterns, refining the 1D landmark representations through token communications in the spatial dimension via 1D convolutional layers.
no code implementations • 20 Oct 2023 • Xu Huang, Jianxun Lian, Hao Wang, Defu Lian, Xing Xie
Recommendation systems effectively guide users in locating their desired information within extensive content repositories.
1 code implementation • 19 Oct 2023 • Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Jun Zhou, Defu Lian, Ying WEI
In this work, we first investigate such anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding that performance on unseen domains uniformly decreases.
no code implementations • 9 Oct 2023 • Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen, Xiaomin Cheng
To alleviate this problem, some researchers incorporate a prior OD matrix as a target in the regression to provide more structural constraints.
no code implementations • 29 Sep 2023 • Yichang Xu, Chenwang Wu, Defu Lian
Recommender systems have been shown to be vulnerable to poisoning attacks, where malicious data is injected into the dataset to cause the recommender system to provide biased recommendations.
1 code implementation • 26 Sep 2023 • Zhihao Shi, Jie Wang, Fanghua Lu, Hanzhu Chen, Defu Lian, Zheng Wang, Jieping Ye, Feng Wu
The inverse mapping leads to an objective function equivalent to that of joint training, while it can effectively incorporate GNNs into the training phase of NEs to counteract the learning bias.
Ranked #1 on Node Property Prediction on ogbn-proteins
no code implementations • 4 Sep 2023 • Jin Zhang, Defu Lian, Hong Xie, Yawen Li, Enhong Chen
Furthermore, we employ Bayesian meta-learning methods to effectively address the cold-start problem and derive theoretical regret bounds for our proposed method, ensuring a robust performance guarantee.
no code implementations • 31 Aug 2023 • Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
no code implementations • 15 Aug 2023 • Likang Wu, Junji Jiang, Hongke Zhao, Hao Wang, Defu Lian, Mengdi Zhang, Enhong Chen
However, the multi-faceted semantic orientation in the feature-semantic alignment has been neglected by previous work, i.e., the content of a node usually covers diverse topics that are relevant to the semantics of multiple labels.
no code implementations • 11 Aug 2023 • Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, Defu Lian
In this paper, we focus on the bottom representation learning of MTL in RS and propose the Deep Task-specific Bottom Representation Network (DTRN) to alleviate the negative transfer problem.
no code implementations • 31 Jul 2023 • Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen
The advent of large language models marks a revolutionary breakthrough in artificial intelligence.
no code implementations • 11 Jul 2023 • Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen, Xiaomin Cheng
To this end, this paper proposes an integrated method, which uses deep learning methods to infer the structure of OD sequence and uses structural constraints to guide traditional numerical optimization.
1 code implementation • 5 Jun 2023 • Zhaoyi Li, Ying WEI, Defu Lian
Despite the rising prevalence of neural sequence models, recent empirical evidence suggests their deficiency in compositional generalization.
1 code implementation • 14 Mar 2023 • Moritz Neun, Christian Eichenberger, Henry Martin, Markus Spanring, Rahul Siripurapu, Daniel Springer, Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou, Martin Lumiste, Andrei Ilie, Xinhua Wu, Cheng Lyu, Qing-Long Lu, Vishal Mahajan, Yichao Lu, Jiezhang Li, Junjun Li, Yue-Jiao Gong, Florian Grötschla, Joël Mathys, Ye Wei, He Haitao, Hui Fang, Kevin Malm, Fei Tang, Michael Kopp, David Kreil, Sepp Hochreiter
We only provide vehicle count data from spatially sparse stationary vehicle detectors in these three cities as model input for this task.
no code implementations • 1 Mar 2023 • Yongqiang Han, Likang Wu, Hao Wang, Guifeng Wang, Mengdi Zhang, Zhi Li, Defu Lian, Enhong Chen
Sequential Recommendation is a widely studied paradigm for learning users' dynamic interests from historical interactions for predicting the next potential item.
1 code implementation • 13 Feb 2023 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang, Jun Zhou, Meng Wang
We augment imbalanced training data towards balanced data distribution to improve fairness.
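One simple, concrete way to push training data toward a balanced group distribution is random oversampling; the sketch below is only a baseline illustration, not the paper's learned augmentation:

```python
import numpy as np

def oversample_to_balance(X, group, rng):
    """Re-sample minority-group rows (with replacement) until every group
    is as large as the biggest one -- the simplest route to a balanced
    training distribution.
    """
    groups, counts = np.unique(group, return_counts=True)
    target = counts.max()
    keep = []
    for g, c in zip(groups, counts):
        idx = np.flatnonzero(group == g)
        extra = rng.choice(idx, size=target - c, replace=True)
        keep.append(np.concatenate([idx, extra]))
    sel = np.concatenate(keep)
    return X[sel], group[sel]

rng = np.random.default_rng(0)
X = np.arange(10).reshape(5, 2)
g = np.array([0, 0, 0, 1, 1])
Xb, gb = oversample_to_balance(X, g, rng)
print(np.bincount(gb))  # [3 3]
```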
no code implementations • 15 Nov 2022 • Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen
Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concern about adopting GNNs in various safety-critical applications.
2 code implementations • 30 Oct 2022 • Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou
In this technical report, we present our solutions to the Traffic4cast 2022 core challenge and extended challenge.
no code implementations • 25 Oct 2022 • Qingyang Wang, Defu Lian, Chenwang Wu, Enhong Chen
Notably, TCD adds pseudo-label data instead of deleting abnormal data, which avoids removing normal data, and the cooperative training of the three models further benefits model generalization.
1 code implementation • 28 Jun 2022 • Xu Huang, Defu Lian, Jin Chen, Zheng Liu, Xing Xie, Enhong Chen
Deep recommender systems (DRS) are intensively applied in modern web services.
1 code implementation • 17 Jun 2022 • Chenwang Wu, Defu Lian, Yong Ge, Min Zhou, Enhong Chen, DaCheng Tao
Second, considering that MixFM may generate redundant or even detrimental instances, we further put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted as SMFM).
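For reference, plain mixup (the starting point that MixFM builds on) interpolates two instances and their labels with a Beta-distributed coefficient; the saliency guidance of SMFM is omitted in this sketch:

```python
import numpy as np

def mixup_instance(x1, y1, x2, y2, rng, alpha=0.5):
    """Plain mixup: convexly interpolate two instances and their labels.

    lam ~ Beta(alpha, alpha); saliency-guided mixup (SMFM) would bias the
    interpolation toward salient features, which is not shown here.
    """
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
x_mix, y_mix = mixup_instance(np.zeros(4), 0.0, np.ones(4), 1.0, rng)
```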
no code implementations • 30 May 2022 • Jin Chen, Defu Lian, Yucheng Li, Baoyun Wang, Kai Zheng, Enhong Chen
Recommender retrievers aim to rapidly retrieve a fraction of items from the entire item corpus upon a user query, with the representative two-tower model trained with the log softmax loss.
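The log softmax loss over in-batch negatives, as commonly used to train two-tower retrievers, can be sketched as follows (a generic illustration, not the paper's exact sampling scheme):

```python
import numpy as np

def log_softmax_loss(user_emb, item_emb):
    """In-batch log softmax loss for a two-tower retriever.

    Row i of item_emb is the positive item for user i; the remaining rows
    of the batch act as negatives in the softmax denominator.
    """
    logits = user_emb @ item_emb.T                        # (B, B) scores
    logits = logits - logits.max(axis=1, keepdims=True)   # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # NLL of positives

rng = np.random.default_rng(0)
users = rng.normal(size=(4, 8))
items = rng.normal(size=(4, 8))
loss = log_softmax_loss(users, items)
```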
no code implementations • 27 Apr 2022 • Gangwei Jiang, Shiyao Wang, Tiezheng Ge, Yuning Jiang, Ying WEI, Defu Lian
The synthetic training images with erasure ground-truth are then fed to train a coarse-to-fine erasing network.
1 code implementation • 18 Apr 2022 • Bisheng Li, Min Zhou, Shengzhong Zhang, Menglin Yang, Defu Lian, Zengfeng Huang
Regarding missing link inference of diverse networks, we revisit the link prediction techniques and identify the importance of both the structural and attribute information.
1 code implementation • 18 Apr 2022 • Menglin Yang, Min Zhou, Jiahong Liu, Defu Lian, Irwin King
Hyperbolic space offers a spacious room to learn embeddings with its negative curvature and metric properties, which can well fit data with tree-like structures.
2 code implementations • 1 Apr 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
We perform comprehensive explorations of the optimal conduct of knowledge distillation, which may provide useful insights for learning VQ-based ANN indexes.
no code implementations • Findings (ACL) 2022 • Jiannan Xiang, Huayang Li, Yahui Liu, Lemao Liu, Guoping Huang, Defu Lian, Shuming Shi
Current practices in metric evaluation focus on a single dataset, e.g., the Newstest dataset in each year's WMT Metrics Shared Task.
no code implementations • Findings (ACL) 2022 • Jiannan Xiang, Huayang Li, Defu Lian, Guoping Huang, Taro Watanabe, Lemao Liu
To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality.
no code implementations • 28 Feb 2022 • Junhan Yang, Zheng Liu, Shitao Xiao, Jianxun Lian, Lijun Wu, Defu Lian, Guangzhong Sun, Xing Xie
Instead of relying on annotation heuristics defined by humans, it leverages the sentence representation model itself and realizes the following iterative self-supervision process: on one hand, the improvement of sentence representation may contribute to the quality of data annotation; on the other hand, more effective data annotation helps to generate high-quality positive samples, which will further improve the current sentence representation model.
no code implementations • 23 Jan 2022 • Chao Feng, Defu Lian, Xiting Wang, Zheng Liu, Xing Xie, Enhong Chen
Instead of searching the nearest neighbor for the query, we search the item with maximum inner product with query on the proximity graph.
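Greedy search on a proximity graph under an inner-product score, i.e., walking to whichever neighbor has the largest inner product with the query until a local maximum is reached, can be sketched as:

```python
import numpy as np

def greedy_mips(query, items, neighbors, start=0):
    """Greedy walk on a proximity graph that maximizes inner product.

    Instead of hunting for the nearest neighbor, each step moves to the
    adjacent node whose item has the largest inner product with the query,
    stopping at a local maximum. neighbors[i] lists node i's edges.
    """
    cur = start
    while True:
        cand = neighbors[cur] + [cur]
        best = max(cand, key=lambda j: query @ items[j])
        if best == cur:
            return cur
        cur = best

items = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 2.0], [0.5, 1.5]])
neighbors = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
q = np.array([0.0, 1.0])
print(greedy_mips(q, items, neighbors))  # 2 -- the item maximizing q.x
```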
2 code implementations • 14 Jan 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, Xing Xie
In this work, we tackle this problem with Bi-Granular Document Representation, where the lightweight sparse embeddings are indexed and standby in memory for coarse-grained candidate search, and the heavyweight dense embeddings are hosted in disk for fine-grained post verification.
2 code implementations • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying WEI, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability of each candidate task being sampled, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks.
no code implementations • 29 Sep 2021 • Daoyuan Chen, Wuchao Li, Yaliang Li, Bolin Ding, Kai Zeng, Defu Lian, Jingren Zhou
We theoretically analyze prediction error bounds that link $\epsilon$ with data characteristics for an illustrative learned index method.
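The flavor of an $\epsilon$-bounded learned index can be shown with a linear model over sorted keys: predict a position, then search only within the recorded worst-case error (a generic illustration, not the paper's method):

```python
import numpy as np

def fit_learned_index(keys):
    """Fit a linear model mapping sorted keys to positions and record the
    worst-case prediction error eps, so a lookup only needs to search
    within +/- eps of the predicted slot.
    """
    pos = np.arange(len(keys))
    slope, intercept = np.polyfit(keys, pos, 1)
    pred = np.rint(slope * keys + intercept).astype(int)
    eps = int(np.abs(pred - pos).max())
    return slope, intercept, eps

def lookup(key, keys, model):
    slope, intercept, eps = model
    guess = int(round(slope * key + intercept))
    lo, hi = max(0, guess - eps), min(len(keys), guess + eps + 1)
    span = np.searchsorted(keys[lo:hi], key) + lo
    return span if span < len(keys) and keys[span] == key else -1

keys = np.sort(np.random.default_rng(0).uniform(0, 100, 50))
model = fit_learned_index(keys)
```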
1 code implementation • 13 Sep 2021 • Jin Chen, Defu Lian, Binbin Jin, Xu Huang, Kai Zheng, Enhong Chen
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
no code implementations • 28 May 2021 • Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang
With the rapid development of these services in the last two decades, users have accumulated a massive amount of behavior data.
1 code implementation • 28 May 2021 • Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang
Inspired by the idea of vector quantization that uses cluster centroids to approximate items, we propose LISA (LInear-time Self Attention), which enjoys both the effectiveness of vanilla self-attention and the efficiency of sparse attention.
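The centroid-approximation idea can be sketched generically: assign keys to centroids, aggregate their values, and attend over the C centroids instead of the N keys (a simplified illustration, not LISA's exact formulation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def centroid_attention(queries, keys, values, codebook):
    """Linear-time attention sketch via vector quantization.

    Each key is approximated by its nearest codebook centroid, so attention
    weights are computed over C centroids rather than N keys (N >> C).
    """
    # Assign every key to its nearest centroid.
    d2 = ((keys[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                     # (N,)
    # Aggregate values per centroid (mean of assigned values).
    C = codebook.shape[0]
    agg = np.zeros((C, values.shape[1]))
    counts = np.bincount(assign, minlength=C).astype(float)
    np.add.at(agg, assign, values)
    agg /= np.maximum(counts, 1)[:, None]
    # Attention over the C centroids only.
    w = softmax(queries @ codebook.T)              # (M, C)
    return w @ agg

rng = np.random.default_rng(0)
out = centroid_attention(rng.normal(size=(3, 2)), rng.normal(size=(10, 2)),
                         rng.normal(size=(10, 4)), rng.normal(size=(4, 2)))
```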
1 code implementation • Findings (ACL) 2021 • Jiannan Xiang, Yahui Liu, Deng Cai, Huayang Li, Defu Lian, Lemao Liu
An important aspect of developing dialogue systems is how to evaluate and compare the performance of different systems.
1 code implementation • NeurIPS 2021 • Junhan Yang, Zheng Liu, Shitao Xiao, Chaozhuo Li, Defu Lian, Sanjay Agrawal, Amit Singh, Guangzhong Sun, Xing Xie
Representation learning on textual graphs generates low-dimensional embeddings for the nodes based on their individual textual features and neighbourhood information.
1 code implementation • 6 May 2021 • Ziniu Wu, Pei Yu, Peilun Yang, Rong Zhu, Yuxing Han, Yaliang Li, Defu Lian, Kai Zeng, Jingren Zhou
We propose to explore the transferabilities of the ML methods both across tasks and across DBs to tackle these fundamental drawbacks.
no code implementations • 22 Apr 2021 • Junhan Yang, Zheng Liu, Bowen Jin, Jianxun Lian, Defu Lian, Akshay Soni, Eun Yong Kang, Yajun Wang, Guangzhong Sun, Xing Xie
For the sake of efficient recommendation, conventional methods would generate user and advertisement embeddings independently with a siamese transformer encoder, such that approximate nearest neighbour search (ANN) can be leveraged.
1 code implementation • 2 Mar 2021 • Jin Chen, Tiezheng Ge, Gangwei Jiang, Zhiqiang Zhang, Defu Lian, Kai Zheng
Based on the tree structure, Thompson sampling is adapted with dynamic programming, leading to efficient exploration for potential ad creatives with the largest CTR.
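Without the tree structure and dynamic programming, the underlying Thompson-sampling loop for creative selection reduces to the classic Beta-Bernoulli sketch below (a flat prior is assumed; all names are hypothetical):

```python
import numpy as np

def thompson_pick(clicks, shows, rng):
    """Sample a CTR per creative from its Beta posterior and pick the
    creative with the largest sample. A flat Beta(1, 1) prior is assumed;
    the paper's tree structure and dynamic programming are omitted.
    """
    samples = rng.beta(clicks + 1, shows - clicks + 1)
    return int(samples.argmax())

rng = np.random.default_rng(0)
true_ctr = np.array([0.02, 0.05, 0.10])
clicks = np.zeros(3)
shows = np.zeros(3)
for _ in range(2000):
    a = thompson_pick(clicks, shows, rng)
    clicks[a] += rng.random() < true_ctr[a]   # simulated click feedback
    shows[a] += 1
```

Over the rounds, exploration concentrates on the creative with the highest true CTR.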
1 code implementation • 28 Feb 2021 • Jin Chen, Ju Xu, Gangwei Jiang, Tiezheng Ge, Zhiqiang Zhang, Defu Lian, Kai Zheng
However, interactions between creative elements may be more complex than the inner product, and the FM-estimated CTR may be of high variance due to limited feedback.
1 code implementation • 8 Feb 2021 • Shiyao Wang, Qi Liu, Tiezheng Ge, Defu Lian, Zhiqiang Zhang
Creatives play an important role in e-commerce for exhibiting products.
no code implementations • 2 Nov 2020 • Yan Zhang, Ivor W. Tsang, Hongzhi Yin, Guowu Yang, Defu Lian, Jingjing Li
Specifically, we first pre-train robust item representations from item content data with a Denoising Auto-encoder instead of other deterministic deep learning frameworks; then we fine-tune the entire framework by adding a pairwise loss objective with discrete constraints; moreover, DPH aims to minimize a pairwise ranking loss that is consistent with the ultimate goal of recommendation.
no code implementations • NeurIPS 2020 • Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
GAN-style recommenders (i.e., IRGAN) address the challenge by adversarially learning a generator and a discriminator, such that the generator produces increasingly difficult samples for the discriminator, accelerating the optimization of the discrimination objective.
no code implementations • 24 May 2020 • Le Wu, Yonghui Yang, Lei Chen, Defu Lian, Richang Hong, Meng Wang
The transfer network is designed to approximate the learned item embeddings from graph neural networks by taking each item's visual content as input, in order to tackle the new segment problem in the test phase.
1 code implementation • 12 May 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xuemin Lin
We observe that existing works on structured entity interaction prediction cannot properly exploit the unique graph of graphs model.
1 code implementation • International World Wide Web Conference 2020 • Defu Lian, Haoyu Wang, Zheng Liu, Jianxun Lian, Enhong Chen, Xing Xie
On top of such a structure, LightRec will have an item represented as additive composition of B codewords, which are optimally selected from each of the codebooks.
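Additive composition from B codebooks can be illustrated with a greedy residual encoder (a simplified sketch; LightRec selects its codewords differently):

```python
import numpy as np

def encode_additive(item, codebooks):
    """Greedily pick one codeword per codebook so their sum approximates
    the item embedding: an item becomes an additive composition of B
    codewords, one from each of the B codebooks.
    """
    residual = item.copy()
    codes = []
    for book in codebooks:                      # book: (K, d)
        idx = int(((residual - book) ** 2).sum(axis=1).argmin())
        codes.append(idx)
        residual -= book[idx]
    return codes

def decode_additive(codes, codebooks):
    return sum(book[c] for book, c in zip(codebooks, codes))

rng = np.random.default_rng(0)
books = [rng.normal(size=(16, 4)) for _ in range(4)]   # B=4, K=16, d=4
item = rng.normal(size=4)
codes = encode_additive(item, books)
approx = decode_additive(codes, books)
```

Storing B small indices per item in place of a dense vector is what makes the representation lightweight.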
no code implementations • 19 Apr 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xiangjian He, Yiguang Lin, Xuemin Lin
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches to binarize the model parameters and learn the compact embedding.
no code implementations • 15 Jul 2019 • Zheng Liu, Yu Xing, Jianxun Lian, Defu Lian, Ziyao Li, Xing Xie
Our work is undergoing an anonymous review and will be released soon after the notification.
no code implementations • 5 Jun 2019 • Haoyu Wang, Defu Lian, Yong Ge
Then we distill the ranking information derived from GCN into binarized collaborative filtering, which makes use of binary representation to improve the efficiency of online recommendation.
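Binary representations make online ranking cheap because Hamming distance reduces to a dot product over ±1 codes; a minimal sketch (not the paper's distillation procedure):

```python
import numpy as np

def binarize(emb):
    """Sign-binarize real-valued embeddings into +/-1 codes."""
    return np.where(emb >= 0, 1, -1).astype(np.int8)

def hamming_rank(user_code, item_codes):
    """Rank items by Hamming distance to the user code (ascending).

    For +/-1 codes of length d, Hamming distance = (d - dot) / 2, so
    sorting by descending dot product yields the same order and keeps
    online ranking cheap.
    """
    scores = item_codes.astype(int) @ user_code
    return np.argsort(-scores)

rng = np.random.default_rng(0)
user = binarize(rng.normal(size=16))
items = binarize(rng.normal(size=(5, 16)))
order = hamming_rank(user, items)
```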
no code implementations • 27 May 2019 • Hao Wang, Tong Xu, Qi Liu, Defu Lian, Enhong Chen, Dongfang Du, Han Wu, Wen Su
Recently, Network Representation Learning (NRL) techniques, which represent graph structure via low-dimensional vectors to support social-oriented applications, have attracted wide attention.
1 code implementation • 13 Feb 2019 • Shoujin Wang, Longbing Cao, Yan Wang, Quan Z. Sheng, Mehmet Orgun, Defu Lian
In recent years, session-based recommender systems (SBRSs) have emerged as a new paradigm of RSs.
2 code implementations • ICDM 2018 • Hong Yang, Shirui Pan, Peng Zhang, Ling Chen, Defu Lian, Chengqi Zhang
To this end, we present a Binarized Attributed Network Embedding model (BANE for short) to learn binary node representation.
Ranked #1 on Link Prediction on Wiki