1 code implementation • 5 Feb 2024 • Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, Zheng Liu
It can simultaneously perform the three common retrieval functionalities of embedding models: dense retrieval, multi-vector retrieval, and sparse retrieval, providing a unified model foundation for real-world IR applications.
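As a sketch of how the three retrieval modes score a query-document pair (toy vectors and term weights of our own, not the actual M3-Embedding code):

```python
# Illustrative sketch of dense, sparse, and multi-vector scoring.
# All data below is made up for demonstration.

def dense_score(q, d):
    # Dense retrieval: inner product of single pooled embeddings.
    return sum(a * b for a, b in zip(q, d))

def sparse_score(q_weights, d_weights):
    # Sparse retrieval: sum of term-weight products over shared tokens.
    return sum(w * d_weights[t] for t, w in q_weights.items() if t in d_weights)

def multi_vector_score(q_vecs, d_vecs):
    # Multi-vector retrieval (late interaction): each query token
    # matches its best-scoring document token.
    return sum(max(dense_score(qv, dv) for dv in d_vecs) for qv in q_vecs)

print(dense_score([1.0, 0.0], [0.5, 0.5]))                        # 0.5
print(sparse_score({"deep": 2.0}, {"deep": 1.5, "ir": 1.0}))      # 3.0
print(multi_vector_score([[1.0, 0.0]], [[0.2, 0.1], [0.9, 0.0]])) # 0.9
```

In practice a single model emits all three representations, so the scores can also be fused into one ranking signal.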
2 code implementations • ICDM 2018 • Hong Yang, Shirui Pan, Peng Zhang, Ling Chen, Defu Lian, Chengqi Zhang
To this end, we present a Binarized Attributed Network Embedding model (BANE for short) to learn binary node representation.
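A minimal illustration of binary node representations (a generic sign-binarization sketch, not BANE's learning procedure):

```python
# Toy sketch: real-valued embeddings are binarized by sign, and node
# similarity becomes a cheap bit-matching count (Hamming similarity).

def binarize(vec):
    # Map each coordinate to +1 / -1 by its sign.
    return [1 if x >= 0 else -1 for x in vec]

def hamming_similarity(a, b):
    # Number of matching bits; maximal when the codes agree everywhere.
    return sum(1 for x, y in zip(a, b) if x == y)

u = binarize([0.3, -1.2, 0.7, 0.1])   # [1, -1, 1, 1]
v = binarize([0.5, -0.4, -0.2, 0.9])  # [1, -1, -1, 1]
print(hamming_similarity(u, v))       # 3
```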
Ranked #1 on Link Prediction on Wiki
1 code implementation • 13 Feb 2019 • Shoujin Wang, Longbing Cao, Yan Wang, Quan Z. Sheng, Mehmet Orgun, Defu Lian
In recent years, session-based recommender systems (SBRSs) have emerged as a new paradigm of RSs.
1 code implementation • 31 Aug 2023 • Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system.
1 code implementation • 29 Feb 2024 • Yuxuan Lei, Jianxun Lian, Jing Yao, Mingqi Wu, Defu Lian, Xing Xie
Our empirical studies demonstrate that fine-tuning embedding models on the dataset leads to remarkable improvements in a variety of retrieval tasks.
1 code implementation • NeurIPS 2021 • Junhan Yang, Zheng Liu, Shitao Xiao, Chaozhuo Li, Defu Lian, Sanjay Agrawal, Amit Singh, Guangzhong Sun, Xing Xie
Representation learning on textual graphs generates low-dimensional embeddings for nodes based on their individual textual features and neighbourhood information.
1 code implementation • NeurIPS 2023 • Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Defu Lian, Ning An, Longbing Cao, Zhendong Niu
FreTS mainly involves two stages: (i) Domain Conversion, which transforms time-domain signals into complex numbers in the frequency domain; and (ii) Frequency Learning, which applies our redesigned MLPs to learn the real and imaginary parts of the frequency components.
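The two stages can be sketched on a toy series (our own simplification: a plain DFT stands in for Domain Conversion, and a single linear map stands in for the redesigned MLPs):

```python
# Sketch of the two FreTS stages on a toy series (not the paper's code).
import cmath

def dft(x):
    # Stage (i) Domain Conversion: discrete Fourier transform.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def frequency_learning(freqs, w=0.5, b=0.0):
    # Stage (ii), drastically simplified: one linear map applied to the
    # real and imaginary parts of each frequency component.
    return [complex(w * z.real + b, w * z.imag + b) for z in freqs]

signal = [1.0, 0.0, -1.0, 0.0]   # a simple oscillation
freqs = dft(signal)
out = frequency_learning(freqs)
print(round(abs(out[1]), 3))     # magnitude of the dominant frequency, halved
```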
2 code implementations • 30 Oct 2022 • Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou
In this technical report, we present our solutions to the Traffic4cast 2022 core challenge and extended challenge.
1 code implementation • 14 Mar 2023 • Moritz Neun, Christian Eichenberger, Henry Martin, Markus Spanring, Rahul Siripurapu, Daniel Springer, Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou, Martin Lumiste, Andrei Ilie, Xinhua Wu, Cheng Lyu, Qing-Long Lu, Vishal Mahajan, Yichao Lu, Jiezhang Li, Junjun Li, Yue-Jiao Gong, Florian Grötschla, Joël Mathys, Ye Wei, He Haitao, Hui Fang, Kevin Malm, Fei Tang, Michael Kopp, David Kreil, Sepp Hochreiter
We only provide vehicle count data from spatially sparse stationary vehicle detectors in these three cities as model input for this task.
2 code implementations • 16 Apr 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
2 code implementations • 14 Jan 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, Xing Xie
In this work, we tackle this problem with Bi-Granular Document Representation, where the lightweight sparse embeddings are indexed and standby in memory for coarse-grained candidate search, and the heavyweight dense embeddings are hosted in disk for fine-grained post verification.
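The coarse-then-fine pipeline can be sketched as follows (hypothetical index layout and toy embeddings; `retrieve`, `coarse_index`, and `fine_index` are illustrative names):

```python
# Two-stage retrieval sketch: lightweight embeddings shortlist
# candidates ("in memory"), full embeddings re-rank them ("on disk").

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query_coarse, query_fine, coarse_index, fine_index, k=2):
    # Stage 1: coarse-grained candidate search over cheap embeddings.
    candidates = sorted(coarse_index,
                        key=lambda i: -dot(query_coarse, coarse_index[i]))[:k]
    # Stage 2: fine-grained post-verification over full embeddings.
    return max(candidates, key=lambda i: dot(query_fine, fine_index[i]))

coarse = {"a": [1.0], "b": [0.9], "c": [0.1]}
fine = {"a": [0.2, 0.1], "b": [0.9, 0.8], "c": [1.0, 1.0]}
print(retrieve([1.0], [1.0, 1.0], coarse, fine))  # 'b'
```

Note the point of the design: item "c" would win a full-corpus dense search, but the coarse stage keeps only a small shortlist so the expensive embeddings are touched for just k items.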
2 code implementations • 1 Apr 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
We perform comprehensive explorations for the optimal conduct of knowledge distillation, which may provide useful insights for the learning of VQ based ANN index.
1 code implementation • 8 Feb 2021 • Shiyao Wang, Qi Liu, Tiezheng Ge, Defu Lian, Zhiqiang Zhang
Creatives play an important role in exhibiting products in e-commerce.
1 code implementation • Findings (ACL) 2021 • Jiannan Xiang, Yahui Liu, Deng Cai, Huayang Li, Defu Lian, Lemao Liu
An important aspect of developing dialogue systems is how to evaluate and compare the performance of different systems.
2 code implementations • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying WEI, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler that decides which meta-training tasks to use next by predicting the probability of each candidate task being sampled, and we train the scheduler to optimize the generalization capacity of the meta-model on unseen tasks.
1 code implementation • International World Wide Web Conference 2020 • Defu Lian, Haoyu Wang, Zheng Liu, Jianxun Lian, Enhong Chen, Xing Xie
On top of such a structure, LightRec represents an item as an additive composition of B codewords, which are optimally selected from each of the codebooks.
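The additive composition can be illustrated directly (toy codebooks and selections; LightRec learns both):

```python
# Sketch: an item vector is the sum of one selected codeword from each
# of the B codebooks, so only B small indices are stored per item.

def compose(codebooks, selections):
    dim = len(codebooks[0][0])
    item = [0.0] * dim
    for book, idx in zip(codebooks, selections):
        item = [a + b for a, b in zip(item, book[idx])]
    return item

codebooks = [
    [[1.0, 0.0], [0.0, 1.0]],  # codebook 1
    [[0.5, 0.5], [0.2, 0.8]],  # codebook 2
]
print(compose(codebooks, [0, 1]))  # [1.2, 0.8]
```

With B codebooks of K codewords each, an item costs B integer indices instead of a full float vector, which is where the memory savings come from.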
1 code implementation • EMNLP 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
1 code implementation • 12 May 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xuemin Lin
We observe that existing works on structured entity interaction prediction cannot properly exploit the unique graph of graphs model.
1 code implementation • 6 May 2021 • Ziniu Wu, Pei Yu, Peilun Yang, Rong Zhu, Yuxing Han, Yaliang Li, Defu Lian, Kai Zeng, Jingren Zhou
We propose to explore the transferabilities of the ML methods both across tasks and across DBs to tackle these fundamental drawbacks.
1 code implementation • 28 May 2021 • Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang
Inspired by the idea of vector quantization that uses cluster centroids to approximate items, we propose LISA (LInear-time Self Attention), which enjoys both the effectiveness of vanilla self-attention and the efficiency of sparse attention.
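A toy sketch of the underlying idea, with attention computed over a handful of centroids instead of every item in the history (our simplification of LISA, not its actual implementation):

```python
# Attention over C cluster centroids that approximate the items, so the
# cost grows with the codebook size C rather than the sequence length.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def centroid_attention(query, centroids):
    scores = [sum(q * c for q, c in zip(query, cent)) for cent in centroids]
    weights = softmax(scores)
    dim = len(centroids[0])
    # Weighted sum of centroids stands in for attending over all items.
    return [sum(w * cent[j] for w, cent in zip(weights, centroids))
            for j in range(dim)]

out = centroid_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print([round(x, 3) for x in out])
```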
1 code implementation • 13 Feb 2023 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang, Jun Zhou, Meng Wang
We augment imbalanced training data towards balanced data distribution to improve fairness.
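A generic oversampling sketch illustrates augmenting toward a balanced distribution (not the paper's specific augmentation method):

```python
# Resample each group up to the size of the largest group so that the
# training distribution is balanced across groups.
import random

def oversample_to_balance(groups, seed=0):
    rng = random.Random(seed)
    target = max(len(g) for g in groups.values())
    balanced = {}
    for name, items in groups.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        balanced[name] = items + extra
    return balanced

data = {"majority": [1, 2, 3, 4], "minority": [5, 6]}
balanced = oversample_to_balance(data)
print({k: len(v) for k, v in balanced.items()})  # {'majority': 4, 'minority': 4}
```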
1 code implementation • 28 Feb 2021 • Jin Chen, Ju Xu, Gangwei Jiang, Tiezheng Ge, Zhiqiang Zhang, Defu Lian, Kai Zheng
However, interactions between creative elements may be more complex than the inner product, and the FM-estimated CTR may be of high variance due to limited feedback.
1 code implementation • 2 Mar 2021 • Jin Chen, Tiezheng Ge, Gangwei Jiang, Zhiqiang Zhang, Defu Lian, Kai Zheng
Based on the tree structure, Thompson sampling is adapted with dynamic programming, leading to efficient exploration for potential ad creatives with the largest CTR.
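A flat Beta-Bernoulli Thompson sampling sketch conveys the exploration step (it omits the paper's tree structure and dynamic programming):

```python
# Thompson sampling for creative selection: sample a CTR estimate from
# each creative's Beta posterior, show the argmax, update with feedback.
import random

def select_creative(posteriors):
    # posteriors: {creative_id: [alpha, beta]} Beta parameters.
    samples = {cid: random.betavariate(a, b) for cid, (a, b) in posteriors.items()}
    return max(samples, key=samples.get)

def update(posteriors, cid, clicked):
    a, b = posteriors[cid]
    posteriors[cid] = [a + 1, b] if clicked else [a, b + 1]

random.seed(0)
posteriors = {"ad1": [1, 1], "ad2": [1, 1]}  # uniform priors
chosen = select_creative(posteriors)
update(posteriors, chosen, clicked=True)
print(chosen in posteriors)  # True
```

The tree structure in the paper shares feedback across similar creatives; this flat version treats each creative independently.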
1 code implementation • 18 Apr 2022 • Menglin Yang, Min Zhou, Jiahong Liu, Defu Lian, Irwin King
Hyperbolic space offers ample room to learn embeddings with its negative curvature and metric properties, which fit data with tree-like structures well.
1 code implementation • 6 Nov 2023 • Mingjia Yin, Hao Wang, Xiang Xu, Likang Wu, Sirui Zhao, Wei Guo, Yong Liu, Ruiming Tang, Defu Lian, Enhong Chen
To this end, we propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR), that incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
1 code implementation • 28 Jun 2022 • Xu Huang, Defu Lian, Jin Chen, Zheng Liu, Xing Xie, Enhong Chen
Deep recommender systems (DRS) are intensively applied in modern web services.
1 code implementation • 5 Jun 2023 • Zhaoyi Li, Ying WEI, Defu Lian
Despite the rising prevalence of neural sequence models, recent empirical evidence suggests their deficiency in compositional generalization.
1 code implementation • 19 Oct 2023 • Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Jun Zhou, Defu Lian, Ying WEI
In this work, we first investigate the anytime fine-tuning effectiveness of existing continual pre-training approaches and find that performance consistently decreases on unseen domains.
1 code implementation • 18 Apr 2022 • Bisheng Li, Min Zhou, Shengzhong Zhang, Menglin Yang, Defu Lian, Zengfeng Huang
Regarding missing link inference of diverse networks, we revisit the link prediction techniques and identify the importance of both the structural and attribute information.
1 code implementation • 13 Sep 2021 • Jin Chen, Defu Lian, Binbin Jin, Xu Huang, Kai Zheng, Enhong Chen
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
1 code implementation • 5 Apr 2024 • Tianqi Zhong, Zhaoyi Li, Quan Wang, Linqi Song, Ying WEI, Defu Lian, Zhendong Mao
Compositional generalization, representing the model's ability to generate text with new attribute combinations obtained by recombining single attributes from the training data, is a crucial property for multi-aspect controllable text generation (MCTG) methods.
1 code implementation • CIKM 2023 • Yongfu Fan, Jin Chen, Yongquan Jiang, Defu Lian, Fangda Guo, Kai Zheng
Recommendation retrievers commonly retrieve users' potentially preferred items from a large item corpus, where the query and item representations are learned by dual encoders with the log-softmax loss.
1 code implementation • 26 Sep 2023 • Zhihao Shi, Jie Wang, Fanghua Lu, Hanzhu Chen, Defu Lian, Zheng Wang, Jieping Ye, Feng Wu
The inverse mapping leads to an objective function that is equivalent to that of joint training, while effectively incorporating GNNs into the training phase of NEs against the learning bias.
Ranked #1 on Node Property Prediction on ogbn-proteins
1 code implementation • 23 Jan 2024 • Qingyang Wang, Chenwang Wu, Defu Lian, Enhong Chen
Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process, thoroughly exploring CoAttack's attack potential in the cooperative training of attack and defense.
no code implementations • 27 May 2019 • Hao Wang, Tong Xu, Qi Liu, Defu Lian, Enhong Chen, Dongfang Du, Han Wu, Wen Su
Recently, the Network Representation Learning (NRL) techniques, which represent graph structure via low-dimension vectors to support social-oriented application, have attracted wide attention.
no code implementations • 5 Jun 2019 • Haoyu Wang, Defu Lian, Yong Ge
Then we distill the ranking information derived from the GCN into binarized collaborative filtering, which uses binary representations to improve the efficiency of online recommendation.
no code implementations • 15 Jul 2019 • Zheng Liu, Yu Xing, Jianxun Lian, Defu Lian, Ziyao Li, Xing Xie
Our work is undergoing anonymous review and will be released soon after notification.
no code implementations • 19 Apr 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xiangjian He, Yiguang Lin, Xuemin Lin
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches to binarize the model parameters and learn the compact embedding.
no code implementations • 24 May 2020 • Le Wu, Yonghui Yang, Lei Chen, Defu Lian, Richang Hong, Meng Wang
The transfer network is designed to approximate the learned item embeddings from graph neural networks by taking each item's visual content as input, in order to tackle the new segment problem in the test phase.
no code implementations • NeurIPS 2020 • Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
GAN-style recommenders (i.e., IRGAN) address the challenge by adversarially learning a generator and a discriminator, such that the generator produces increasingly difficult samples for the discriminator, accelerating optimization of the discrimination objective.
no code implementations • 2 Nov 2020 • Yan Zhang, Ivor W. Tsang, Hongzhi Yin, Guowu Yang, Defu Lian, Jingjing Li
Specifically, we first pre-train robust item representations from item content data with a denoising auto-encoder rather than other deterministic deep learning frameworks; we then fine-tune the entire framework by adding a pairwise loss objective with discrete constraints; moreover, DPH aims to minimize a pairwise ranking loss that is consistent with the ultimate goal of recommendation.
no code implementations • 22 Apr 2021 • Junhan Yang, Zheng Liu, Bowen Jin, Jianxun Lian, Defu Lian, Akshay Soni, Eun Yong Kang, Yajun Wang, Guangzhong Sun, Xing Xie
For the sake of efficient recommendation, conventional methods would generate user and advertisement embeddings independently with a siamese transformer encoder, such that approximate nearest neighbour search (ANN) can be leveraged.
no code implementations • 28 May 2021 • Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang
With the rapid development of these services in the last two decades, users have accumulated a massive amount of behavior data.
no code implementations • 29 Sep 2021 • Daoyuan Chen, Wuchao Li, Yaliang Li, Bolin Ding, Kai Zeng, Defu Lian, Jingren Zhou
We theoretically analyze prediction error bounds that link $\epsilon$ with data characteristics for an illustrative learned index method.
no code implementations • 23 Jan 2022 • Chao Feng, Defu Lian, Xiting Wang, Zheng Liu, Xing Xie, Enhong Chen
Instead of searching for the nearest neighbor of the query, we search for the item with the maximum inner product with the query on the proximity graph.
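Greedy search on a proximity graph under the inner-product measure can be sketched as follows (toy graph and vectors of our own):

```python
# Hop to whichever neighbour improves the inner product with the query;
# stop when no neighbour scores higher than the current node.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def greedy_mips(graph, vectors, query, start):
    current = start
    while True:
        best = max(graph[current],
                   key=lambda n: dot(query, vectors[n]),
                   default=current)
        if dot(query, vectors[best]) <= dot(query, vectors[current]):
            return current  # local maximum of the inner product
        current = best

vectors = {"a": [0.1, 0.1], "b": [0.5, 0.2], "c": [0.9, 0.9]}
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(greedy_mips(graph, vectors, [1.0, 1.0], "a"))  # 'c'
```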
no code implementations • 28 Feb 2022 • Junhan Yang, Zheng Liu, Shitao Xiao, Jianxun Lian, Lijun Wu, Defu Lian, Guangzhong Sun, Xing Xie
Instead of relying on annotation heuristics defined by humans, it leverages the sentence representation model itself and realizes the following iterative self-supervision process: on the one hand, improving the sentence representation contributes to the quality of data annotation; on the other hand, more effective data annotation helps generate high-quality positive samples, which further improve the current sentence representation model.
no code implementations • Findings (ACL) 2022 • Jiannan Xiang, Huayang Li, Defu Lian, Guoping Huang, Taro Watanabe, Lemao Liu
To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality.
no code implementations • Findings (ACL) 2022 • Jiannan Xiang, Huayang Li, Yahui Liu, Lemao Liu, Guoping Huang, Defu Lian, Shuming Shi
Current practices in metric evaluation focus on a single dataset, e.g., the Newstest dataset in each year's WMT Metrics Shared Task.
no code implementations • 27 Apr 2022 • Gangwei Jiang, Shiyao Wang, Tiezheng Ge, Yuning Jiang, Ying WEI, Defu Lian
The synthetic training images with erasure ground-truth are then fed to train a coarse-to-fine erasing network.
no code implementations • 30 May 2022 • Jin Chen, Defu Lian, Yucheng Li, Baoyun Wang, Kai Zheng, Enhong Chen
Recommender retrievers aim to rapidly retrieve a fraction of items from the entire item corpus when a user query arrives, with the representative two-tower model trained with the log-softmax loss.
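The log-softmax objective for a two-tower scorer can be sketched with fixed toy embeddings (the real model learns both encoders; names here are illustrative):

```python
# Score a query against every item with inner products, then take the
# negative log-softmax of the positive item's score.
import math

def log_softmax_loss(query, items, positive_idx):
    scores = [sum(q * v for q, v in zip(query, item)) for item in items]
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return -(scores[positive_idx] - log_z)

query = [1.0, 0.0]
items = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
loss = log_softmax_loss(query, items, positive_idx=0)
print(round(loss, 3))
```

In production the softmax runs over the full corpus (or a sampled subset), which is exactly why sampling strategies for this loss matter.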
1 code implementation • 17 Jun 2022 • Chenwang Wu, Defu Lian, Yong Ge, Min Zhou, Enhong Chen, DaCheng Tao
Second, considering that MixFM may generate redundant or even detrimental instances, we further put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted as SMFM).
no code implementations • 25 Oct 2022 • Qingyang Wang, Defu Lian, Chenwang Wu, Enhong Chen
Notably, TCD adds pseudo label data instead of deleting abnormal data, which avoids the cleaning of normal data, and the cooperative training of the three models is also beneficial to model generalization.
no code implementations • 15 Nov 2022 • Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen
Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concern about applying GNNs in various safety-critical applications.
no code implementations • 1 Mar 2023 • Yongqiang Han, Likang Wu, Hao Wang, Guifeng Wang, Mengdi Zhang, Zhi Li, Defu Lian, Enhong Chen
Sequential Recommendation is a widely studied paradigm for learning users' dynamic interests from historical interactions for predicting the next potential item.
no code implementations • 11 Jul 2023 • Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen, Xiaomin Cheng
To this end, this paper proposes an integrated method, which uses deep learning methods to infer the structure of OD sequence and uses structural constraints to guide traditional numerical optimization.
no code implementations • 31 Jul 2023 • Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen
The advent of large language models marks a revolutionary breakthrough in artificial intelligence.
no code implementations • 11 Aug 2023 • Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, Defu Lian
In this paper, we focus on the bottom representation learning of MTL in RS and propose the Deep Task-specific Bottom Representation Network (DTRN) to alleviate the negative transfer problem.
no code implementations • 15 Aug 2023 • Likang Wu, Junji Jiang, Hongke Zhao, Hao Wang, Defu Lian, Mengdi Zhang, Enhong Chen
However, the multi-faceted semantic orientation in the feature-semantic alignment has been neglected by previous work, i.e., the content of a node usually covers diverse topics that are relevant to the semantics of multiple labels.
no code implementations • 4 Sep 2023 • Jin Zhang, Defu Lian, Hong Xie, Yawen Li, Enhong Chen
Furthermore, we employ Bayesian meta-learning methods to effectively address the cold-start problem and derive theoretical regret bounds for our proposed method, ensuring a robust performance guarantee.
no code implementations • 29 Sep 2023 • Yichang Xu, Chenwang Wu, Defu Lian
Recommender systems have been shown to be vulnerable to poisoning attacks, where malicious data is injected into the dataset to cause the recommender system to provide biased recommendations.
no code implementations • 9 Oct 2023 • Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen, Xiaomin Cheng
To alleviate this problem, some researchers incorporate a prior OD matrix as a target in the regression to provide more structural constraints.
no code implementations • 20 Oct 2023 • Xu Huang, Jianxun Lian, Hao Wang, Defu Lian, Xing Xie
Recommendation systems effectively guide users in locating their desired information within extensive content repositories.
no code implementations • 18 Nov 2023 • Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, Xing Xie
Behavior alignment operates in the language space, representing user preferences and item information as text to learn the recommendation model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both language and latent spaces for alignment training.
no code implementations • 15 Nov 2023 • Qi Liu, Xuyang Hou, Haoran Jin, Jin Chen, Zhe Wang, Defu Lian, Tan Qu, Jia Cheng, Jun Lei
The insights from this subset reveal the user's decision-making process related to the candidate item, improving prediction accuracy.
no code implementations • 11 Dec 2023 • Ruimeng Li, Yuanhao Pu, Zhaoyi Li, Hong Xie, Defu Lian
This paper considers the out-of-distribution (OOD) generalization problem under the setting that both style distribution shift and spurious features exist and domain labels are missing.
no code implementations • 9 Dec 2023 • Qi Liu, Xuyang Hou, Defu Lian, Zhe Wang, Haoran Jin, Jia Cheng, Jun Lei
Most existing methods focus on the network architecture design of the CTR model for better accuracy and suffer from the data sparsity problem.
no code implementations • 18 Dec 2023 • Zhihao Zhu, Chenwang Wu, Rui Fan, Yi Yang, Defu Lian, Enhong Chen
Recent research demonstrates that GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
no code implementations • 18 Dec 2023 • Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen
Some adversarial attacks have achieved model stealing attacks against recommender systems, to some extent, by collecting abundant training data of the target model (target data) or making a mass of queries.
no code implementations • 5 Feb 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
As Large Language Models (LLMs) have shown significant intelligence, progress in leveraging LLMs as planning modules of autonomous agents has attracted increasing attention.
no code implementations • 22 Feb 2024 • Zhaoyi Li, Gangwei Jiang, Hong Xie, Linqi Song, Defu Lian, Ying WEI
LLMs have marked a revolutionary shift, yet they falter when faced with compositional reasoning tasks.
no code implementations • 26 Feb 2024 • Hantao Yang, Xutong Liu, Zhiyong Wang, Hong Xie, John C. S. Lui, Defu Lian, Enhong Chen
We study the problem of federated contextual combinatorial cascading bandits, where $|\mathcal{U}|$ agents collaborate under the coordination of a central server to provide tailored recommendations to the $|\mathcal{U}|$ corresponding users.
1 code implementation • 13 Mar 2024 • Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han
Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
no code implementations • 14 Mar 2024 • Guanghua Li, Wensheng Lu, Wei Zhang, Defu Lian, Kezhong Lu, Rui Mao, Kai Shu, Hao Liao
The proliferation of fake news has had far-reaching implications on politics, the economy, and society at large.
no code implementations • 26 Mar 2024 • Yongqiang Han, Hao Wang, Kefan Wang, Likang Wu, Zhi Li, Wei Guo, Yong Liu, Defu Lian, Enhong Chen
In recommendation systems, users frequently engage in multiple types of behaviors, such as clicking, adding to a cart, and purchasing.
no code implementations • 30 Mar 2024 • Luankang Zhang, Hao Wang, Suojuan Zhang, Mingjia Yin, Yongqiang Han, Jiaqing Zhang, Defu Lian, Enhong Chen
To this end, we propose a Unified Framework for Adaptive Representation Enhancement and Inversed Learning in Cross-Domain Recommendation (AREIL).
no code implementations • 11 Apr 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
Concretely, WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.