1 code implementation • EMNLP 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
1 code implementation • 1 Dec 2024 • Wei Guo, Hao Wang, Luankang Zhang, Jin Yao Chin, Zhongzhou Liu, Kai Cheng, Qiushi Pan, Yi Quan Lee, Wanqi Xue, Tingjia Shen, Kenan Song, Kefan Wang, Wenjia Xie, Yuyang Ye, Huifeng Guo, Yong liu, Defu Lian, Ruiming Tang, Enhong Chen
In this paper, we aim to enhance the understanding of scaling laws by conducting comprehensive evaluations of large recommendation models.
no code implementations • 30 Nov 2024 • Tingjia Shen, Hao Wang, Chuhan Wu, Jin Yao Chin, Wei Guo, Yong liu, Huifeng Guo, Defu Lian, Ruiming Tang, Enhong Chen
In response, we introduce the Performance Law for SR models, which aims to theoretically investigate and model the relationship between model performance and data quality.
1 code implementation • 22 Nov 2024 • Xiang Xu, Hao Wang, Wei Guo, Luankang Zhang, Wanshan Yang, Runlong Yu, Yong liu, Defu Lian, Enhong Chen
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
no code implementations • 5 Nov 2024 • Zhihao Zhu, Yi Yang, Defu Lian
Training Data Detection (TDD) is a task aimed at determining whether a specific data instance is used to train a machine learning model.
1 code implementation • 3 Nov 2024 • Kun Yi, Jingru Fei, Qi Zhang, Hui He, Shufeng Hao, Defu Lian, Wei Fan
While numerous forecasters have been proposed using different network architectures, Transformer-based models achieve state-of-the-art performance in time series forecasting.
no code implementations • 31 Oct 2024 • Wenjia Xie, Hao Wang, Luankang Zhang, Rui Zhou, Defu Lian, Enhong Chen
Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior sequences.
1 code implementation • 28 Oct 2024 • Qi Liu, Kai Zheng, Rui Huang, Wuchao Li, Kuo Cai, Yuan Chai, Yanan Niu, Yiqun Hui, Bing Han, Na Mou, Hongning Wang, Wentian Bao, Yunen Yu, Guorui Zhou, Han Li, Yang song, Defu Lian, Kun Gai
Industrial recommendation systems (RS) rely on the multi-stage pipeline to balance effectiveness and efficiency when delivering items from a vast corpus to users.
1 code implementation • 9 Oct 2024 • Weichuan Wang, Zhaoyi Li, Defu Lian, Chen Ma, Linqi Song, Ying WEI
In this work, we focus on utilizing LLMs to perform machine translation, where we observe that two patterns of errors frequently occur and drastically affect the translation quality: language mismatch and repetition.
1 code implementation • 8 Oct 2024 • Junxiong Tong, Mingjia Yin, Hao Wang, Qiushi Pan, Defu Lian, Enhong Chen
Cross-domain Recommendation systems leverage multi-domain user interactions to improve performance, especially in sparse data or new user scenarios.
no code implementations • 24 Sep 2024 • Zheng Liu, Chenyuan Wu, Ninglu Shao, Shitao Xiao, Chaozhuo Li, Defu Lian
In this approach, the retrieved contexts are compressed into compact embeddings before being encoded by the LLMs.
1 code implementation • 24 Sep 2024 • Chaofan Li, Minghao Qin, Shitao Xiao, Jianlyu Chen, Kun Luo, Yingxia Shao, Defu Lian, Zheng Liu
To this end, we introduce a novel model bge-en-icl, which employs few-shot examples to produce high-quality text embeddings.
1 code implementation • 21 Sep 2024 • Yuqing Huang, Rongyang Zhang, Xuesong He, Xuyang Zhi, Hao Wang, Xin Li, Feiyang Xu, Deguang Liu, Huadong Liang, Yi Li, Jian Cui, Zimu Liu, Shijin Wang, Guoping Hu, Guiquan Liu, Qi Liu, Defu Lian, Enhong Chen
To this end, we propose ChemEval, which provides a comprehensive assessment of the capabilities of LLMs across a wide range of chemical domain tasks.
no code implementations • 2 Sep 2024 • Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yuxian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, Enhong Chen
Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability.
no code implementations • 29 Aug 2024 • Qi Liu, Xingyuan Tang, Jianqiang Huang, Xiangqian Yu, Haoran Jin, Jin Chen, Yuanhao Pu, Defu Lian, Tan Qu, Zhe Wang, Jia Cheng, Jun Lei
The challenges include the inefficiencies arising from the management of extensive source data and the problem of 'catastrophic forgetting' that results from the CTR model's daily updating.
no code implementations • 22 Aug 2024 • Wuchao Li, Rui Huang, Haijun Zhao, Chi Liu, Kai Zheng, Qi Liu, Na Mou, Guorui Zhou, Defu Lian, Yang song, Wentian Bao, Enyun Yu, Wenwu Ou
Nevertheless, a straightforward combination of SR and DM leads to sub-optimal performance due to discrepancies in learning objectives (recommendation vs. noise reconstruction) and the respective learning spaces (non-stationary vs. stationary).
no code implementations • 21 Aug 2024 • Hao Wang, Yongqiang Han, Kefan Wang, Kai Cheng, Zhen Wang, Wei Guo, Yong liu, Defu Lian, Enhong Chen
Its objective is to extract knowledge from extensive pre-training data and fine-tune the model for downstream tasks.
1 code implementation • 21 Aug 2024 • Ze Liu, Jin Zhang, Chao Feng, Defu Lian, Jie Wang, Enhong Chen
Although advancements in deep learning have significantly enhanced the recommendation accuracy of deep recommendation models, these methods still suffer from low recommendation efficiency.
no code implementations • 20 Aug 2024 • Hong Xie, Jinyu Mo, Defu Lian, Jie Wang, Enhong Chen
We also design an iterative distributed algorithm for players to commit to an optimal arm pulling profile with a constant number of rounds in expectation.
no code implementations • 20 Aug 2024 • Hong Xie, Mingze Zhong, Defu Lian, Zhen Wang, Enhong Chen
We also study the speed of convergence numerically and reveal trade-offs in selecting rating aggregation rules and review selection mechanisms.
no code implementations • 15 Aug 2024 • Xihong Yang, Heming Jing, Zixing Zhang, Jindong Wang, Huakang Niu, Shuaiqiang Wang, Yu Lu, Junfeng Wang, Dawei Yin, Xinwang Liu, En Zhu, Defu Lian, Erxue Min
In this work, we prove, from an information-theoretic perspective, that directly aligning the representations of LLMs and collaborative models is sub-optimal for enhancing downstream recommendation performance.
no code implementations • 22 Jul 2024 • Xihong Yang, Yiqi Wang, Jin Chen, Wenqi Fan, Xiangyu Zhao, En Zhu, Xinwang Liu, Defu Lian
To address this challenge, we propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR.
2 code implementations • 9 Jul 2024 • Mingjia Yin, Chuhan Wu, YuFei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong liu, Ruiming Tang, Defu Lian, Enhong Chen
Inspired by the information compression nature of LLMs, we uncover an "entropy law" that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively.
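The compression ratio the entry above mentions can be made concrete with a standard compressor as a proxy for redundancy. A minimal sketch (illustrative only; `zlib` stands in for whatever compressor the paper actually uses, and the function name is ours):

```python
import zlib

def compression_ratio(texts):
    """Compressed size / raw size of a corpus: lower means more redundant.

    An illustrative proxy for the data-redundancy quantity that the
    "entropy law" relates to LLM performance; not the paper's exact metric.
    """
    raw = "\n".join(texts).encode("utf-8")
    compressed = zlib.compress(raw, level=9)
    return len(compressed) / len(raw)

# Highly repetitive data compresses far better than diverse data.
redundant = ["the cat sat on the mat"] * 50
diverse = [f"sample sentence number {i} with unique id {i**2}" for i in range(50)]
assert compression_ratio(redundant) < compression_ratio(diverse)
```

Under the entropy law, the redundant corpus (smaller ratio) carries less information per token, which is the signal used for data selection.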
no code implementations • 3 Jul 2024 • Yu Huang, Min Zhou, Menglin Yang, Zhen Wang, Muhan Zhang, Jie Wang, Hong Xie, Hao Wang, Defu Lian, Enhong Chen
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
no code implementations • 18 Jun 2024 • Gangwei Jiang, Zhaoyi Li, Defu Lian, Ying WEI
Fine-tuning large language models (LLMs) can cause them to lose their general capabilities.
1 code implementation • 18 Jun 2024 • Chenyuan Wu, Gangwei Jiang, Defu Lian
Lifelong prompt tuning has significantly advanced parameter-efficient lifelong learning with its efficiency and minimal storage demands on various tasks.
1 code implementation • 5 Jun 2024 • Tingjia Shen, Hao Wang, Jiaqing Zhang, Sirui Zhao, Liangyue Li, Zulong Chen, Defu Lian, Enhong Chen
To this end, we propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach and domain grounding on LLM simultaneously.
1 code implementation • 3 Jun 2024 • Tianjing Zeng, Junwei Lan, Jiahong Ma, Wenqing Wei, Rong Zhu, Pengfei Li, Bolin Ding, Defu Lian, Zhewei Wei, Jingren Zhou
It is generally applicable to any unseen new database, attaining high estimation accuracy with a preparation cost as low as that of basic one-dimensional histogram-based CardEst methods.
1 code implementation • 28 May 2024 • Mingjia Yin, Hao Wang, Wei Guo, Yong liu, Suojuan Zhang, Sirui Zhao, Defu Lian, Enhong Chen
The sequential recommender (SR) system is a crucial component of modern recommender systems, as it aims to capture the evolving preferences of users.
no code implementations • 21 May 2024 • Mingjia Yin, Hao Wang, Wei Guo, Yong liu, Zhi Li, Sirui Zhao, Zhen Wang, Defu Lian, Enhong Chen
Cross-domain sequential recommendation (CDSR) aims to uncover and transfer users' sequential preferences across multiple recommendation domains.
1 code implementation • 17 May 2024 • Xingmei Wang, Weiwen Liu, Xiaolong Chen, Qi Liu, Xu Huang, Yichao Wang, Xiangyang Li, Yasheng Wang, Zhenhua Dong, Defu Lian, Ruiming Tang
This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining training and inference efficiency.
no code implementations • 10 May 2024 • Yichen Qian, Yongyi He, Rong Zhu, Jintao Huang, Zhijian Ma, Haibin Wang, Yaohua Wang, Xiuyu Sun, Defu Lian, Bolin Ding, Jingren Zhou
In this paper, inspired by the cross-task generality of LLMs on NLP tasks, we take the first step toward designing an automatic and general solution to tackle data manipulation tasks.
1 code implementation • 29 Apr 2024 • Meng Li, Haoran Jin, Ruixuan Huang, Zhihao Xu, Defu Lian, Zijia Lin, Di Zhang, Xiting Wang
Based on this, we quantify the faithfulness of a concept explanation via perturbation.
no code implementations • 25 Apr 2024 • Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen
Large language models (LLMs) show early signs of artificial general intelligence but struggle with hallucinations.
no code implementations • 11 Apr 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
Concretely, WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
1 code implementation • 5 Apr 2024 • Tianqi Zhong, Zhaoyi Li, Quan Wang, Linqi Song, Ying WEI, Defu Lian, Zhendong Mao
Compositional generalization, representing the model's ability to generate text with new attribute combinations obtained by recombining single attributes from the training data, is a crucial property for multi-aspect controllable text generation (MCTG) methods.
no code implementations • 30 Mar 2024 • Luankang Zhang, Hao Wang, Suojuan Zhang, Mingjia Yin, Yongqiang Han, Jiaqing Zhang, Defu Lian, Enhong Chen
To this end, we propose a Unified Framework for Adaptive Representation Enhancement and Inversed Learning in Cross-Domain Recommendation (AREIL).
no code implementations • 26 Mar 2024 • Yongqiang Han, Hao Wang, Kefan Wang, Likang Wu, Zhi Li, Wei Guo, Yong liu, Defu Lian, Enhong Chen
In recommendation systems, users frequently engage in multiple types of behaviors, such as clicking, adding to a cart, and purchasing.
4 code implementations • 19 Mar 2024 • Mingyue Cheng, Xiaoyu Tao, Qi Liu, Hao Zhang, Yiheng Chen, Defu Lian
To address this challenge, we propose CrossTimeNet, a novel cross-domain SSL learning framework to learn transferable knowledge from various domains to largely benefit the target downstream task.
no code implementations • 14 Mar 2024 • Guanghua Li, Wensheng Lu, Wei zhang, Defu Lian, Kezhong Lu, Rui Mao, Kai Shu, Hao Liao
The proliferation of fake news has had far-reaching implications on politics, the economy, and society at large.
1 code implementation • 13 Mar 2024 • Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han
Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
1 code implementation • 29 Feb 2024 • Yuxuan Lei, Jianxun Lian, Jing Yao, Mingqi Wu, Defu Lian, Xing Xie
Our empirical studies demonstrate that fine-tuning embedding models on the dataset leads to remarkable improvements in a variety of retrieval tasks.
no code implementations • 26 Feb 2024 • Hantao Yang, Xutong Liu, Zhiyong Wang, Hong Xie, John C. S. Lui, Defu Lian, Enhong Chen
We study the problem of federated contextual combinatorial cascading bandits, where $|\mathcal{U}|$ agents collaborate under the coordination of a central server to provide tailored recommendations to the $|\mathcal{U}|$ corresponding users.
no code implementations • 22 Feb 2024 • Zhaoyi Li, Gangwei Jiang, Hong Xie, Linqi Song, Defu Lian, Ying WEI
LLMs have marked a revolutionary shift, yet they falter when faced with compositional reasoning tasks.
3 code implementations • 5 Feb 2024 • Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, Zheng Liu
It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval, which provides a unified model foundation for real-world IR applications.
no code implementations • 5 Feb 2024 • Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, Enhong Chen
As Large Language Models (LLMs) have shown significant intelligence, the progress to leverage LLMs as planning modules of autonomous agents has attracted more attention.
1 code implementation • 23 Jan 2024 • Qingyang Wang, Chenwang Wu, Defu Lian, Enhong Chen
Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process, thoroughly exploring CoAttack's attack potential in the cooperative training of attack and defense.
no code implementations • 18 Dec 2023 • Zhihao Zhu, Chenwang Wu, Rui Fan, Yi Yang, Zhen Wang, Defu Lian, Enhong Chen
Recent research demonstrates that GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
no code implementations • 18 Dec 2023 • Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen
Some adversarial attacks have achieved model stealing attacks against recommender systems, to some extent, by collecting abundant training data of the target model (target data) or making a mass of queries.
no code implementations • 11 Dec 2023 • Ruimeng Li, Yuanhao Pu, Zhaoyi Li, Hong Xie, Defu Lian
This paper considers the out-of-distribution (OOD) generalization problem under the setting that both style distribution shift and spurious features exist and domain labels are missing.
no code implementations • 9 Dec 2023 • Qi Liu, Xuyang Hou, Defu Lian, Zhe Wang, Haoran Jin, Jia Cheng, Jun Lei
Most existing methods focus on the network architecture design of the CTR model for better accuracy and suffer from the data sparsity problem.
1 code implementation • 18 Nov 2023 • Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, Xing Xie
Behavior alignment operates in the language space, representing user preferences and item information as text to mimic the target model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both language and latent spaces.
no code implementations • 15 Nov 2023 • Qi Liu, Xuyang Hou, Haoran Jin, Xiaolong Chen, Jin Chen, Defu Lian, Zhe Wang, Jia Cheng, Jun Lei
The insights from this subset reveal the user's decision-making process related to the candidate item, improving prediction accuracy.
2 code implementations • NeurIPS 2023 • Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Defu Lian, Ning An, Longbing Cao, Zhendong Niu
FreTS mainly involves two stages: (i) Domain Conversion, which transforms time-domain signals into complex numbers in the frequency domain; (ii) Frequency Learning, which performs our redesigned MLPs to learn the real and imaginary parts of the frequency components.
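The two stages can be sketched in a few lines of numpy. This is an illustrative simplification (single linear layer per part, our own function and parameter names), not the paper's architecture:

```python
import numpy as np

def frequency_mlp(x, w_real, w_imag, b_real, b_imag):
    """Sketch of the two stages described above:
    (i) Domain Conversion: real FFT maps time-domain signals to complex
        frequency coefficients;
    (ii) Frequency Learning: separate learnable maps act on the real and
        imaginary parts, then an inverse FFT returns to the time domain.
    """
    spec = np.fft.rfft(x, axis=-1)                       # (i) time -> frequency
    real = np.maximum(spec.real @ w_real + b_real, 0.0)  # MLP layer on real part
    imag = np.maximum(spec.imag @ w_imag + b_imag, 0.0)  # MLP layer on imaginary part
    return np.fft.irfft(real + 1j * imag, n=x.shape[-1], axis=-1)  # back to time

rng = np.random.default_rng(0)
series = rng.standard_normal((4, 32))   # batch of 4 series, length 32
f = series.shape[-1] // 2 + 1           # rFFT output length: 32 // 2 + 1 = 17
w = rng.standard_normal((f, f)) * 0.1
out = frequency_mlp(series, w, w, np.zeros(f), np.zeros(f))
assert out.shape == series.shape
```

Operating per frequency bin is what gives frequency-domain MLPs their global view of the series: each bin summarizes the whole time axis.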
1 code implementation • 6 Nov 2023 • Mingjia Yin, Hao Wang, Xiang Xu, Likang Wu, Sirui Zhao, Wei Guo, Yong liu, Ruiming Tang, Defu Lian, Enhong Chen
To this end, we propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR), that incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
1 code implementation • CIKM 2023 • Yongfu Fan, Jin Chen, Yongquan Jiang, Defu Lian, Fangda Guo, Kai Zheng
Recommendation retrievers commonly retrieve user potentially preferred items from numerous items, where the query and item representation are learned according to the dual encoders with the log-softmax loss.
no code implementations • 20 Oct 2023 • Xu Huang, Jianxun Lian, Hao Wang, Defu Lian, Xing Xie
Recommendation systems effectively guide users in locating their desired information within extensive content repositories.
1 code implementation • 19 Oct 2023 • Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Jun Zhou, Defu Lian, Ying WEI
In this work, we first investigate such anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding with unanimously decreased performance on unseen domains.
no code implementations • 9 Oct 2023 • Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen, Xiaomin Cheng
To alleviate this problem, some researchers incorporate a prior OD matrix as a target in the regression to provide more structural constraints.
no code implementations • 29 Sep 2023 • Yichang Xu, Chenwang Wu, Defu Lian
Recommender systems have been shown to be vulnerable to poisoning attacks, where malicious data is injected into the dataset to cause the recommender system to provide biased recommendations.
1 code implementation • 26 Sep 2023 • Zhihao Shi, Jie Wang, Fanghua Lu, Hanzhu Chen, Defu Lian, Zheng Wang, Jieping Ye, Feng Wu
The inverse mapping leads to an objective function that is equivalent to that by the joint training, while it can effectively incorporate GNNs in the training phase of NEs against the learning bias.
Ranked #1 on Node Property Prediction on ogbn-proteins
no code implementations • 20 Sep 2023 • Jie Wang, Hanzhu Chen, Qitan Lv, Zhihao Shi, Jiajun Chen, Huarui He, Hongtao Xie, Defu Lian, Enhong Chen, Feng Wu
This implies the great potential of the semantic correlations for the entity-independent inductive link prediction task.
2 code implementations • 14 Sep 2023 • Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff, Defu Lian, Jian-Yun Nie
Along with our resources on general Chinese embedding, we release our data and models for English text embeddings.
no code implementations • 4 Sep 2023 • Jin Zhang, Defu Lian, Hong Xie, Yawen Li, Enhong Chen
Furthermore, we employ Bayesian meta-learning methods to effectively address the cold-start problem and derive theoretical regret bounds for our proposed method, ensuring a robust performance guarantee.
1 code implementation • 31 Aug 2023 • Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system.
no code implementations • 15 Aug 2023 • Likang Wu, Junji Jiang, Hongke Zhao, Hao Wang, Defu Lian, Mengdi Zhang, Enhong Chen
However, the multi-faceted semantic orientation in the feature-semantic alignment has been neglected by previous work, i.e., the content of a node usually covers diverse topics that are relevant to the semantics of multiple labels.
no code implementations • 11 Aug 2023 • Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, Defu Lian
In this paper, we focus on the bottom representation learning of MTL in RS and propose the Deep Task-specific Bottom Representation Network (DTRN) to alleviate the negative transfer problem.
no code implementations • 31 Jul 2023 • Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen
The advent of large language models marks a revolutionary breakthrough in artificial intelligence.
no code implementations • 11 Jul 2023 • Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen, Xiaomin Cheng
To this end, this paper proposes an integrated method, which uses deep learning methods to infer the structure of OD sequence and uses structural constraints to guide traditional numerical optimization.
1 code implementation • 5 Jun 2023 • Zhaoyi Li, Ying WEI, Defu Lian
Despite the rising prevalence of neural sequence models, recent empirical evidence suggests their deficiency in compositional generalization.
no code implementations • 17 Mar 2023 • Jie Wang, Zhihao Shi, Xize Liang, Defu Lian, Shuiwang Ji, Bin Li, Enhong Chen, Feng Wu
During the message passing (MP) in GNNs, subgraph-wise sampling methods discard messages outside the mini-batches in backward passes to avoid the well-known neighbor explosion problem, i.e., the exponentially increasing dependencies of nodes with the number of MP iterations.
1 code implementation • 14 Mar 2023 • Moritz Neun, Christian Eichenberger, Henry Martin, Markus Spanring, Rahul Siripurapu, Daniel Springer, Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou, Martin Lumiste, Andrei Ilie, Xinhua Wu, Cheng Lyu, Qing-Long Lu, Vishal Mahajan, Yichao Lu, Jiezhang Li, Junjun Li, Yue-Jiao Gong, Florian Grötschla, Joël Mathys, Ye Wei, He Haitao, Hui Fang, Kevin Malm, Fei Tang, Michael Kopp, David Kreil, Sepp Hochreiter
We only provide vehicle count data from spatially sparse stationary vehicle detectors in these three cities as model input for this task.
no code implementations • 1 Mar 2023 • Yongqiang Han, Likang Wu, Hao Wang, Guifeng Wang, Mengdi Zhang, Zhi Li, Defu Lian, Enhong Chen
Sequential Recommendation is a widely studied paradigm for learning users' dynamic interests from historical interactions for predicting the next potential item.
1 code implementation • 13 Feb 2023 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang, Jun Zhou, Meng Wang
We augment imbalanced training data towards balanced data distribution to improve fairness.
no code implementations • 15 Nov 2022 • Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen
Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concerns for adapting GNNs in various safety-critical applications.
2 code implementations • 30 Oct 2022 • Leyan Deng, Chenwang Wu, Defu Lian, Min Zhou
In this technical report, we present our solutions to the Traffic4cast 2022 core challenge and extended challenge.
no code implementations • 25 Oct 2022 • Qingyang Wang, Defu Lian, Chenwang Wu, Enhong Chen
Notably, TCD adds pseudo label data instead of deleting abnormal data, which avoids the cleaning of normal data, and the cooperative training of the three models is also beneficial to model generalization.
1 code implementation • 28 Jun 2022 • Xu Huang, Defu Lian, Jin Chen, Zheng Liu, Xing Xie, Enhong Chen
Deep recommender systems (DRS) are intensively applied in modern web services.
1 code implementation • 17 Jun 2022 • Chenwang Wu, Defu Lian, Yong Ge, Min Zhou, Enhong Chen, DaCheng Tao
Second, considering that MixFM may generate redundant or even detrimental instances, we further put forward a novel Factorization Machine powered by Saliency-guided Mixup (denoted as SMFM).
no code implementations • 30 May 2022 • Jin Chen, Defu Lian, Yucheng Li, Baoyun Wang, Kai Zheng, Enhong Chen
Recommender retrievers aim to rapidly retrieve a fraction of items from the entire item corpus when a user query arrives, with the representative two-tower model trained with the log-softmax loss.
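The log-softmax loss for a two-tower retriever is commonly computed over in-batch negatives. A minimal sketch of that common formulation (not the paper's exact estimator; names are ours):

```python
import numpy as np

def log_softmax_loss(user_emb, item_emb):
    """In-batch log-softmax loss for a two-tower retriever.

    Row i of each matrix is a matched (user, item) pair; every other item
    in the batch serves as a negative for user i.
    """
    logits = user_emb @ item_emb.T                  # (B, B) inner-product scores
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # NLL of the matched pairs

rng = np.random.default_rng(0)
users = rng.standard_normal((8, 16))
items = users + 0.1 * rng.standard_normal((8, 16))  # matched items near their users
# Loss is far below the uniform-guessing baseline log(B) when pairs match.
assert log_softmax_loss(users, items) < np.log(8)
```

Because only inner products are needed, the trained item tower can be indexed offline and searched with approximate nearest-neighbor methods at serving time.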
no code implementations • 27 Apr 2022 • Gangwei Jiang, Shiyao Wang, Tiezheng Ge, Yuning Jiang, Ying WEI, Defu Lian
The synthetic training images with erasure ground-truth are then fed to train a coarse-to-fine erasing network.
1 code implementation • 18 Apr 2022 • Bisheng Li, Min Zhou, Shengzhong Zhang, Menglin Yang, Defu Lian, Zengfeng Huang
Regarding missing link inference of diverse networks, we revisit the link prediction techniques and identify the importance of both the structural and attribute information.
1 code implementation • 18 Apr 2022 • Menglin Yang, Min Zhou, Jiahong Liu, Defu Lian, Irwin King
Hyperbolic space offers a spacious room to learn embeddings with its negative curvature and metric properties, which can well fit data with tree-like structures.
2 code implementations • 1 Apr 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
We perform comprehensive explorations for the optimal conduct of knowledge distillation, which may provide useful insights for the learning of VQ based ANN index.
no code implementations • Findings (ACL) 2022 • Jiannan Xiang, Huayang Li, Yahui Liu, Lemao Liu, Guoping Huang, Defu Lian, Shuming Shi
Current practices in metric evaluation focus on one single dataset, e.g., the Newstest dataset in each year's WMT Metrics Shared Task.
no code implementations • Findings (ACL) 2022 • Jiannan Xiang, Huayang Li, Defu Lian, Guoping Huang, Taro Watanabe, Lemao Liu
To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality.
no code implementations • 28 Feb 2022 • Junhan Yang, Zheng Liu, Shitao Xiao, Jianxun Lian, Lijun Wu, Defu Lian, Guangzhong Sun, Xing Xie
Instead of relying on annotation heuristics defined by humans, it leverages the sentence representation model itself and realizes the following iterative self-supervision process: on one hand, the improvement of sentence representation may contribute to the quality of data annotation; on the other hand, more effective data annotation helps to generate high-quality positive samples, which will further improve the current sentence representation model.
no code implementations • 23 Jan 2022 • Chao Feng, Defu Lian, Xiting Wang, Zheng Liu, Xing Xie, Enhong Chen
Instead of searching the nearest neighbor for the query, we search the item with maximum inner product with query on the proximity graph.
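The search rule in the entry above, greedily maximizing the inner product rather than minimizing distance, can be sketched on a toy graph. The graph construction and function below are illustrative, not the paper's index:

```python
import numpy as np

def greedy_graph_mips(query, items, neighbors, start=0):
    """Greedy walk on a proximity graph that maximizes inner product
    with the query (rather than minimizing distance). `neighbors[v]`
    lists the graph neighbors of item v. Returns a local optimum.
    """
    current = start
    while True:
        # Move to the neighbor with the largest inner product, if any improves.
        best = max(neighbors[current], key=lambda v: items[v] @ query)
        if items[best] @ query <= items[current] @ query:
            return current
        current = best

rng = np.random.default_rng(0)
items = rng.standard_normal((100, 8))
# Toy proximity graph: each item is linked to 10 random others.
neighbors = {i: list(rng.choice(100, size=10, replace=False)) for i in range(100)}
query = rng.standard_normal(8)
found = greedy_graph_mips(query, items, neighbors)
# No neighbor of the returned item has a larger inner product with the query.
assert all(items[found] @ query >= items[v] @ query for v in neighbors[found])
```

Each step strictly increases the inner product, so the walk terminates; real indexes use carefully built graphs so that such local optima are close to the global maximum-inner-product item.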
2 code implementations • 14 Jan 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, Xing Xie
In this work, we tackle this problem with Bi-Granular Document Representation, where the lightweight sparse embeddings are indexed and standby in memory for coarse-grained candidate search, and the heavyweight dense embeddings are hosted in disk for fine-grained post verification.
2 code implementations • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying WEI, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability being sampled for each candidate task, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks.
no code implementations • 29 Sep 2021 • Daoyuan Chen, Wuchao Li, Yaliang Li, Bolin Ding, Kai Zeng, Defu Lian, Jingren Zhou
We theoretically analyze prediction error bounds that link $\epsilon$ with data characteristics for an illustrative learned index method.
1 code implementation • 13 Sep 2021 • Jin Chen, Defu Lian, Binbin Jin, Xu Huang, Kai Zheng, Enhong Chen
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
no code implementations • 28 May 2021 • Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang
With the rapid development of these services in the last two decades, users have accumulated a massive amount of behavior data.
1 code implementation • 28 May 2021 • Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang
Inspired by the idea of vector quantization that uses cluster centroids to approximate items, we propose LISA (LInear-time Self Attention), which enjoys both the effectiveness of vanilla self-attention and the efficiency of sparse attention.
1 code implementation • NeurIPS 2021 • Junhan Yang, Zheng Liu, Shitao Xiao, Chaozhuo Li, Defu Lian, Sanjay Agrawal, Amit Singh, Guangzhong Sun, Xing Xie
The representation learning on textual graph is to generate low-dimensional embeddings for the nodes based on the individual textual features and the neighbourhood information.
1 code implementation • Findings (ACL) 2021 • Jiannan Xiang, Yahui Liu, Deng Cai, Huayang Li, Defu Lian, Lemao Liu
An important aspect of developing dialogue systems is how to evaluate and compare the performance of different systems.
1 code implementation • 6 May 2021 • Ziniu Wu, Pei Yu, Peilun Yang, Rong Zhu, Yuxing Han, Yaliang Li, Defu Lian, Kai Zeng, Jingren Zhou
We propose to explore the transferabilities of the ML methods both across tasks and across DBs to tackle these fundamental drawbacks.
no code implementations • 22 Apr 2021 • Junhan Yang, Zheng Liu, Bowen Jin, Jianxun Lian, Defu Lian, Akshay Soni, Eun Yong Kang, Yajun Wang, Guangzhong Sun, Xing Xie
For the sake of efficient recommendation, conventional methods would generate user and advertisement embeddings independently with a siamese transformer encoder, such that approximate nearest neighbour search (ANN) can be leveraged.
2 code implementations • 16 Apr 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
1 code implementation • 2 Mar 2021 • Jin Chen, Tiezheng Ge, Gangwei Jiang, Zhiqiang Zhang, Defu Lian, Kai Zheng
Based on the tree structure, Thompson sampling is adapted with dynamic programming, leading to efficient exploration for potential ad creatives with the largest CTR.
1 code implementation • 28 Feb 2021 • Jin Chen, Ju Xu, Gangwei Jiang, Tiezheng Ge, Zhiqiang Zhang, Defu Lian, Kai Zheng
However, interactions between creative elements may be more complex than the inner product, and the FM-estimated CTR may be of high variance due to limited feedback.
1 code implementation • 8 Feb 2021 • Shiyao Wang, Qi Liu, Tiezheng Ge, Defu Lian, Zhiqiang Zhang
Creatives play an important role in e-commerce for exhibiting products.
no code implementations • 2 Nov 2020 • Yan Zhang, Ivor W. Tsang, Hongzhi Yin, Guowu Yang, Defu Lian, Jingjing Li
Specifically, we first pre-train robust item representations from item content data with a Denoising Auto-encoder instead of other deterministic deep learning frameworks; then we fine-tune the entire framework by adding a pairwise loss objective with discrete constraints; moreover, DPH aims to minimize a pairwise ranking loss that is consistent with the ultimate goal of recommendation.
no code implementations • NeurIPS 2020 • Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
The GAN-style recommenders (i.e., IRGAN) address the challenge by learning a generator and a discriminator adversarially, such that the generator produces increasingly difficult samples for the discriminator to accelerate optimizing the discrimination objective.
no code implementations • 24 May 2020 • Le Wu, Yonghui Yang, Lei Chen, Defu Lian, Richang Hong, Meng Wang
The transfer network is designed to approximate the learned item embeddings from graph neural networks by taking each item's visual content as input, in order to tackle the new segment problem in the test phase.
1 code implementation • 12 May 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xuemin Lin
We observe that existing works on structured entity interaction prediction cannot properly exploit the unique graph of graphs model.
1 code implementation • International World Wide Web Conference 2020 • Defu Lian, Haoyu Wang, Zheng Liu, Jianxun Lian, Enhong Chen, Xing Xie
On top of such a structure, LightRec will have an item represented as additive composition of B codewords, which are optimally selected from each of the codebooks.
no code implementations • 19 Apr 2020 • Hanchen Wang, Defu Lian, Ying Zhang, Lu Qin, Xiangjian He, Yiguang Lin, Xuemin Lin
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches to binarize the model parameters and learn the compact embedding.
no code implementations • 15 Jul 2019 • Zheng Liu, Yu Xing, Jianxun Lian, Defu Lian, Ziyao Li, Xing Xie
Our work is undergoing anonymous review, and it will be released after notification.
no code implementations • 5 Jun 2019 • Haoyu Wang, Defu Lian, Yong Ge
Then we distill the ranking information derived from GCN into binarized collaborative filtering, which makes use of binary representation to improve the efficiency of online recommendation.
no code implementations • 27 May 2019 • Hao Wang, Tong Xu, Qi Liu, Defu Lian, Enhong Chen, Dongfang Du, Han Wu, Wen Su
Recently, the Network Representation Learning (NRL) techniques, which represent graph structure via low-dimension vectors to support social-oriented application, have attracted wide attention.
1 code implementation • 13 Feb 2019 • Shoujin Wang, Longbing Cao, Yan Wang, Quan Z. Sheng, Mehmet Orgun, Defu Lian
In recent years, session-based recommender systems (SBRSs) have emerged as a new paradigm of RSs.
2 code implementations • ICDM 2018 • Hong Yang, Shirui Pan, Peng Zhang, Ling Chen, Defu Lian, Chengqi Zhang
To this end, we present a Binarized Attributed Network Embedding model (BANE for short) to learn binary node representation.
Ranked #1 on Link Prediction on Wiki