Search Results for author: Ruobing Xie

Found 98 papers, 55 papers with code

PatchRec: Multi-Grained Patching for Efficient LLM-based Sequential Recommendation

no code implementations 25 Jan 2025 Jiayi Liao, Ruobing Xie, Sihang Li, Xiang Wang, Xingwu Sun, Zhanhui Kang, Xiangnan He

The framework consists of two stages: (1) Patch Pre-training, which familiarizes LLMs with item-level compression patterns, and (2) Patch Fine-tuning, which teaches LLMs to model sequences at multiple granularities.
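
As a loose illustration of the patching idea (a sketch only; the mean-pooling choice, patch sizes, and names below are assumptions, not the paper's implementation), consecutive item tokens can be compressed into coarser patch embeddings before being fed to the LLM:

import numpy as np

def patchify(token_embs: np.ndarray, patch_size: int) -> np.ndarray:
    """Compress a sequence of token embeddings into patch embeddings by
    mean-pooling every `patch_size` consecutive tokens (illustrative)."""
    n, d = token_embs.shape
    pad = (-n) % patch_size                         # right-pad to a multiple of patch_size
    padded = np.vstack([token_embs, np.zeros((pad, d))])
    return padded.reshape(-1, patch_size, d).mean(axis=1)

tokens = np.random.randn(7, 4)                      # toy item tokens, 4-dim embeddings
item_patches = patchify(tokens, patch_size=2)       # finer granularity: (4, 4)
session_patches = patchify(tokens, patch_size=4)    # coarser granularity: (2, 4)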

Language Modeling Language Modelling +1

Autonomy-of-Experts Models

no code implementations 22 Jan 2025 Ang Lv, Ruobing Xie, Yining Qian, Songhao Wu, Xingwu Sun, Zhanhui Kang, Di Wang, Rui Yan

We argue that the separation between the router's decision-making and the experts' execution is a critical yet overlooked issue, leading to suboptimal expert selection and ineffective learning.
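
A minimal sketch of what expert autonomy could look like, assuming experts are selected by the norms of their own internal pre-activations rather than by a separate router (the shapes, ReLU, and top-k rule are illustrative assumptions):

import numpy as np

def aoe_forward(x, W_down, W_up, k=2):
    """Every expert computes a cheap low-rank pre-activation of the input;
    the experts with the largest pre-activation norms execute (illustrative)."""
    pre_acts = [x @ Wd for Wd in W_down]                 # each expert's own signal
    norms = np.array([np.linalg.norm(a) for a in pre_acts])
    chosen = np.argsort(-norms)[:k]                      # experts "volunteer" by norm
    weights = norms[chosen] / norms[chosen].sum()
    return sum(w * (np.maximum(pre_acts[i], 0) @ W_up[i]) for w, i in zip(weights, chosen))

d, r, n_experts = 8, 4, 4
W_down = [np.random.randn(d, r) for _ in range(n_experts)]
W_up = [np.random.randn(r, d) for _ in range(n_experts)]
out = aoe_forward(np.random.randn(d), W_down, W_up)      # shape (8,)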

Decision Making

Enhancing Contrastive Learning Inspired by the Philosophy of "The Blind Men and the Elephant"

1 code implementation 21 Dec 2024 Yudong Zhang, Ruobing Xie, Jiansheng Chen, Xingwu Sun, Zhanhui Kang, Yu Wang

Contrastive learning is a prevalent technique in self-supervised vision representation learning, typically generating positive pairs by applying two data augmentations to the same image.
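
The standard pairing recipe the abstract refers to can be sketched as follows (a generic SimCLR-style construction, not this paper's specific multi-view method; the augmentation is a placeholder):

import random

def augment(image):
    """Stand-in augmentation; in practice random crops, flips, color jitter."""
    return [px + random.gauss(0, 0.01) for px in image]

def make_positive_pair(image):
    # Two independent augmentations of the same image form a positive pair;
    # augmented views of other images in the batch act as negatives.
    return augment(image), augment(image)

view1, view2 = make_positive_pair([0.2, 0.5, 0.7])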

Contrastive Learning Data Augmentation +2

More Expressive Attention with Negative Weights

1 code implementation 11 Nov 2024 Ang Lv, Ruobing Xie, Shuaipeng Li, Jiayi Liao, Xingwu Sun, Zhanhui Kang, Di Wang, Rui Yan

We propose a novel attention mechanism, named Cog Attention, that enables attention weights to be negative for enhanced expressiveness, which stems from two key factors: (1) Cog Attention enhances parameter flexibility.
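
Purely as an illustration of attention with signed weights (one possible construction, not necessarily the paper's formulation): keep the sign of each score and normalize by total absolute mass instead of applying softmax:

import numpy as np

def signed_attention(q, K, V):
    """Attention whose weights may be negative: scores keep their sign and
    are normalized by total absolute mass (illustrative only)."""
    scores = K @ q / np.sqrt(q.shape[0])
    weights = scores / (np.abs(scores).sum() + 1e-9)     # signed, sum(|w|) = 1
    return weights @ V

q, K, V = np.random.randn(8), np.random.randn(5, 8), np.random.randn(5, 8)
out = signed_attention(q, K, V)                          # shape (8,)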

Decoder Image Generation +2

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

3 code implementations 4 Nov 2024 Xingwu Sun, Yanfeng Chen, Yiqing Huang, Ruobing Xie, Jiaqi Zhu, Kai Zhang, Shuaipeng Li, Zhen Yang, Jonny Han, Xiaobo Shu, Jiahao Bu, Zhongzhi Chen, Xuemeng Huang, Fengzong Lian, Saiyong Yang, Jianfeng Yan, Yuyuan Zeng, Xiaoqin Ren, Chao Yu, Lulu Wu, Yue Mao, Jun Xia, Tao Yang, Suncong Zheng, Kan Wu, Dian Jiao, Jinbao Xue, Xipeng Zhang, Decheng Wu, Kai Liu, Dengpeng Wu, Guanghui Xu, Shaohua Chen, Shuang Chen, Xiao Feng, Yigeng Hong, Junqiang Zheng, Chengcheng Xu, Zongwei Li, Xiong Kuang, Jianglu Hu, Yiqi Chen, Yuchi Deng, Guiyang Li, Ao Liu, Chenchen Zhang, Shihui Hu, Zilong Zhao, Zifan Wu, Yao Ding, Weichao Wang, Han Liu, Roberts Wang, Hao Fei, Peijie Yu, Ze Zhao, Xun Cao, Hai Wang, Fusheng Xiang, Mengyuan Huang, Zhiyuan Xiong, Bin Hu, Xuebin Hou, Lei Jiang, Jianqiang Ma, Jiajia Wu, Yaping Deng, Yi Shen, Qian Wang, Weijie Liu, Jie Liu, Meng Chen, Liang Dong, Weiwen Jia, Hu Chen, Feifei Liu, Rui Yuan, Huilin Xu, Zhenxiang Yan, Tengfei Cao, Zhichao Hu, Xinhua Feng, Dong Du, TingHao Yu, Yangyu Tao, Feng Zhang, Jianchen Zhu, Chengzhong Xu, Xirui Li, Chong Zha, Wen Ouyang, Yinben Xia, Xiang Li, Zekun He, Rongpeng Chen, Jiawei Song, Ruibin Chen, Fan Jiang, Chongqing Zhao, Bo wang, Hao Gong, Rong Gan, Winston Hu, Zhanhui Kang, Yong Yang, Yuhong Liu, Di Wang, Jie Jiang

In this paper, we introduce Hunyuan-Large, which is currently the largest open-source Transformer-based mixture of experts model, with a total of 389 billion parameters and 52 billion activation parameters, capable of handling up to 256K tokens.

Logical Reasoning Mathematical Problem-Solving

FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning

1 code implementation 23 Oct 2024 Wei Chen, Meng Yuan, Zhao Zhang, Ruobing Xie, Fuzhen Zhuang, Deqing Wang, Rui Liu

Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework that aims to improve fairness in recommender systems.

Contrastive Learning Data Augmentation +2

Exploring Forgetting in Large Language Model Pre-Training

no code implementations 22 Oct 2024 Chonghua Liao, Ruobing Xie, Xingwu Sun, Haowen Sun, Zhanhui Kang

Catastrophic forgetting remains a formidable obstacle to building an omniscient model in large language models (LLMs).

Language Modeling Language Modelling +2

Continuous Speech Tokenizer in Text To Speech

no code implementations 22 Oct 2024 Yixing Li, Ruobing Xie, Xingwu Sun, Yu Cheng, Zhanhui Kang

Our results show that the speech language model based on the continuous speech tokenizer has better continuity and higher estimated Mean Opinion Scores (MoS).

Language Modeling Language Modelling +1

Lossless KV Cache Compression to 2%

no code implementations 20 Oct 2024 Zhen Yang, J. N. Han, Kan Wu, Ruobing Xie, An Wang, Xingwu Sun, Zhanhui Kang

Large language models have revolutionized data processing in numerous domains, with their ability to handle extended context reasoning receiving notable recognition.

Dimensionality Reduction Quantization

RosePO: Aligning LLM-based Recommenders with Human Values

no code implementations 16 Oct 2024 Jiayi Liao, Xiangnan He, Ruobing Xie, Jiancan Wu, Yancheng Yuan, Xingwu Sun, Zhanhui Kang, Xiang Wang

Recently, there has been a growing interest in leveraging Large Language Models (LLMs) for recommendation systems, which usually adapt a pre-trained LLM to the recommendation scenario through supervised fine-tuning (SFT).

Hallucination Recommendation Systems

Multimodal Clickbait Detection by De-confounding Biases Using Causal Representation Inference

no code implementations 10 Oct 2024 Jianxing Yu, Shiqi Wang, Han Yin, Zhenlong Sun, Ruobing Xie, Bo Zhang, Yanghui Rao

Considering these features are often mixed up with unknown biases, we then disentangle three kinds of latent factors from them, including the invariant factor that indicates intrinsic bait intention, the causal factor that reflects deceptive patterns in a certain scenario, and non-causal noise.

Causal Inference Clickbait Detection

Exploring the Benefit of Activation Sparsity in Pre-training

1 code implementation 4 Oct 2024 Zhengyan Zhang, Chaojun Xiao, Qiujieli Qin, Yankai Lin, Zhiyuan Zeng, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie zhou

SSD adaptively switches between the Mixtures-of-Experts (MoE) based sparse training and the conventional dense training during the pre-training process, leveraging the efficiency of sparse training and avoiding the static activation correlation of sparse training.
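
A schematic of such a switching schedule (the fixed interval and the model API below are invented for illustration; SSD's actual switching criterion is adaptive):

def pretrain(model, batches, switch_every=1000):
    """Alternate between MoE-style sparse training and conventional dense
    training (fixed schedule shown for illustration; SSD switches adaptively)."""
    sparse = True
    for step, batch in enumerate(batches):
        if step and step % switch_every == 0:
            sparse = not sparse                  # flip the training mode
        model.train_step(batch, sparse=sparse)   # hypothetical model API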

Language Models "Grok" to Copy

no code implementations 14 Sep 2024 Ang Lv, Ruobing Xie, Xingwu Sun, Zhanhui Kang, Rui Yan

We examine the pre-training dynamics of language models, focusing on their ability to copy text from preceding context, a fundamental skill for various LLM applications, including in-context learning (ICL) and retrieval-augmented generation (RAG).

In-Context Learning Language Modelling +1

Negative Sampling in Recommendation: A Survey and Future Directions

no code implementations 11 Sep 2024 Haokai Ma, Ruobing Xie, Lei Meng, Fuli Feng, Xiaoyu Du, Xingwu Sun, Zhanhui Kang, Xiangxu Meng

Recommender systems aim to capture users' personalized preferences from the vast amount of user behaviors, making them pivotal in the era of information explosion.

Recommendation Systems Survey

PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions

1 code implementation 8 Sep 2024 Yudong Zhang, Ruobing Xie, Jiansheng Chen, Xingwu Sun, Yu Wang

We propose an unconventional method named PIP, which utilizes the attention patterns of one randomly selected irrelevant probe question (e.g., "Is there a clock?")
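
The detection idea can be sketched as a small classifier over the attention patterns the probe question induces (the flattened-attention features and logistic-regression detector are assumptions, not the released code):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Attention patterns a LVLM yields for one fixed probe question such as
# "Is there a clock?" on benign vs. adversarial inputs (random stand-ins here).
benign_attn = np.random.rand(100, 64)            # flattened attention maps, label 0
adv_attn = np.random.rand(100, 64) + 0.3         # shifted to mimic a detectable change
X = np.vstack([benign_attn, adv_attn])
y = np.array([0] * 100 + [1] * 100)
detector = LogisticRegression(max_iter=1000).fit(X, y)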

HMoE: Heterogeneous Mixture of Experts for Language Modeling

no code implementations 20 Aug 2024 An Wang, Xingwu Sun, Ruobing Xie, Shuaipeng Li, Jiaqi Zhu, Zhen Yang, Pinxue Zhao, J. N. Han, Zhanhui Kang, Di Wang, Naoaki Okazaki, Cheng-Zhong Xu

To address the imbalance in expert activation, we propose a novel training objective that encourages the frequent activation of smaller experts, enhancing computational efficiency and parameter utilization.
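
One hedged way to express such an objective is an auxiliary penalty that grows when routing mass concentrates on larger experts (the exact weighting below is illustrative, not the paper's loss):

import numpy as np

def size_penalty(router_probs, expert_sizes, coef=0.01):
    """Auxiliary term that grows when routing mass goes to large experts,
    implicitly encouraging activation of the smaller ones (illustrative)."""
    sizes = np.asarray(expert_sizes, dtype=float)
    return coef * float(router_probs @ (sizes / sizes.sum()))

probs = np.array([0.1, 0.2, 0.3, 0.4])           # router distribution over 4 experts
aux_loss = size_penalty(probs, [1e8, 2e8, 4e8, 8e8])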

Computational Efficiency Language Modeling +1

Modeling Domain and Feedback Transitions for Cross-Domain Sequential Recommendation

no code implementations 15 Aug 2024 Changshuo Zhang, Teng Shi, Xiao Zhang, Qi Liu, Ruobing Xie, Jun Xu, Ji-Rong Wen

In this paper, we propose $\text{Transition}^2$, a novel method to model transitions across both domains and types of user feedback.

Sequential Recommendation

Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence

1 code implementation 9 Jul 2024 Weize Chen, Ziming You, Ran Li, Yitong Guan, Chen Qian, Chenyang Zhao, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun

The rapid advancement of large language models (LLMs) has paved the way for the development of highly capable autonomous agents.

Improving Multi-modal Recommender Systems by Denoising and Aligning Multi-modal Content and User Feedback

1 code implementation 18 Jun 2024 Guipeng Xv, Xinyu Li, Ruobing Xie, Chen Lin, Chong Liu, Feng Xia, Zhanhui Kang, Leyu Lin

Multi-modal recommender systems (MRSs) are pivotal in diverse online web platforms and have garnered considerable attention in recent years.

Denoising Recommendation Systems

QAGCF: Graph Collaborative Filtering for Q&A Recommendation

no code implementations 7 Jun 2024 Changshuo Zhang, Teng Shi, Xiao Zhang, Yanping Zheng, Ruobing Xie, Qi Liu, Jun Xu, Ji-Rong Wen

Traditional recommendation methods treat the question-answer pair as a whole or only consider the answer as a single item, which overlooks the two challenges and cannot effectively model user interests.

Collaborative Filtering Contrastive Learning +1

DFGNN: Dual-frequency Graph Neural Network for Sign-aware Feedback

no code implementations 24 May 2024 Yiqing Wu, Ruobing Xie, Zhao Zhang, Xu Zhang, Fuzhen Zhuang, Leyu Lin, Zhanhui Kang, Yongjun Xu

Based on the two observations, we propose a novel model, the Dual-frequency Graph Neural Network for Sign-aware Recommendation (DFGNN), which models positive and negative feedback from a frequency-filter perspective.
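
The frequency-filter view can be illustrated with the normalized graph Laplacian: a low-pass filter smooths signals over the feedback graph, while a high-pass filter emphasizes differences (a textbook construction, not DFGNN's exact architecture):

import numpy as np

def norm_laplacian(A):
    d = np.maximum(A.sum(axis=1), 1e-9)
    D = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - D @ A @ D

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # feedback graph
X = np.random.randn(3, 4)                                     # node features
L = norm_laplacian(A)
low_pass = (np.eye(3) - 0.5 * L) @ X    # smoothing filter for positive feedback
high_pass = L @ X                       # difference filter for negative feedback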

Graph Neural Network

ID-centric Pre-training for Recommendation

no code implementations 6 May 2024 Yiqing Wu, Ruobing Xie, Zhao Zhang, Fuzhen Zhuang, Xu Zhang, Leyu Lin, Zhanhui Kang, Yongjun Xu

Specifically, in pre-training stage, besides the ID-based sequential model for recommendation, we also build a Cross-domain ID-matcher (CDIM) learned by both behavioral and modality information.

Language Modelling Sequential Recommendation

PhD: A ChatGPT-Prompted Visual hallucination Evaluation Dataset

1 code implementation 17 Mar 2024 Jiazhen Liu, Yuhan Fu, Ruobing Xie, Runquan Xie, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Xirong Li

This paper contributes a ChatGPT-Prompted visual hallucination evaluation Dataset (PhD) for objective VHE at a large scale.

Attribute Common Sense Reasoning +4

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models

no code implementations 13 Mar 2024 Ning Ding, Yulin Chen, Ganqu Cui, Xingtai Lv, Weilin Zhao, Ruobing Xie, BoWen Zhou, Zhiyuan Liu, Maosong Sun

Underlying data distributions of natural language, programming code, and mathematical symbols vary vastly, presenting a complex challenge for large language models (LLMs) that strive to achieve high performance across all three domains simultaneously.

Math

Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment

1 code implementation 29 Feb 2024 Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Zexu Sun, Bowen Sun, Huimin Chen, Ruobing Xie, Jie zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun

In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax", a compromise where enhancements in alignment within one objective (e.g., harmlessness) can diminish performance in others (e.g., helpfulness).

Navigate

Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication

1 code implementation 28 Feb 2024 Weize Chen, Chenfei Yuan, Jiarui Yuan, Yusheng Su, Chen Qian, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun

Natural language (NL) has long been the predominant format for human cognition and communication, and by extension, has been similarly pivotal in the development and application of Large Language Models (LLMs).

Plug-in Diffusion Model for Sequential Recommendation

1 code implementation 5 Jan 2024 Haokai Ma, Ruobing Xie, Lei Meng, Xin Chen, Xu Zhang, Leyu Lin, Zhanhui Kang

To address this issue, this paper presents a novel Plug-in Diffusion Model for Recommendation (PDRec) framework, which employs the diffusion model as a flexible plugin to jointly take full advantage of the diffusion-generated user preferences on all items.

Image Generation model +2

MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation

1 code implementation 15 Nov 2023 Xiaozhi Wang, Hao Peng, Yong Guan, Kaisheng Zeng, Jianhui Chen, Lei Hou, Xu Han, Yankai Lin, Zhiyuan Liu, Ruobing Xie, Jie zhou, Juanzi Li

Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships.

All Event Argument Extraction +4

Universal Multi-modal Multi-domain Pre-trained Recommendation

no code implementations 3 Nov 2023 Wenqi Sun, Ruobing Xie, Shuqing Bian, Wayne Xin Zhao, Jie zhou

There is a rapidly-growing research interest in modeling user preferences via pre-training multi-domain interactions for recommender systems.

Recommendation Systems

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

1 code implementation 24 Oct 2023 Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie zhou

Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs.

Computational Efficiency

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

no code implementations 20 Oct 2023 Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan YAO, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie zhou

With the success of pre-trained language models (PLMs) widely verified on various NLP tasks, pioneering efforts have attempted to explore the possible cooperation of the general textual information in PLMs with the personalized behavioral information in user historical behavior sequences to enhance sequential recommendation (SR).

Informativeness Language Modeling +2

Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

no code implementations 19 Oct 2023 Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie zhou

Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise.

AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems

no code implementations 13 Oct 2023 Junjie Zhang, Yupeng Hou, Ruobing Xie, Wenqi Sun, Julian McAuley, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

The optimized agents can also propagate their preferences to other agents in subsequent interactions, implicitly capturing the collaborative filtering idea.

Collaborative Filtering Decision Making +3

UltraFeedback: Boosting Language Models with Scaled AI Feedback

4 code implementations 2 Oct 2023 Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, Zhiyuan Liu, Maosong Sun

Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models, serving as a solid foundation for future feedback learning research.

Language Modelling

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors

1 code implementation 21 Aug 2023 Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie zhou

Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks.

Learning from All Sides: Diversified Positive Augmentation via Self-distillation in Recommendation

no code implementations 15 Aug 2023 Chong Liu, Xiaoyang Liu, Ruobing Xie, Lixin Zhang, Feng Xia, Leyu Lin

A powerful positive item augmentation is beneficial to address the sparsity issue, while few works could jointly consider both the accuracy and diversity of these augmented training labels.

All Diversity +2

Emergent Modularity in Pre-trained Transformers

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie zhou

In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.

Large Language Models are Zero-Shot Rankers for Recommender Systems

2 code implementations 15 May 2023 Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, Wayne Xin Zhao

Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks.

Recommendation Systems

Recyclable Tuning for Continual Pre-training

1 code implementation 15 May 2023 Yujia Qin, Cheng Qian, Xu Han, Yankai Lin, Huadong Wang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie zhou

In pilot studies, we find that after continual pre-training, the upgraded PLM remains compatible with the outdated adapted weights to some extent.

Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach

no code implementations 11 May 2023 Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs.

Instruction Following Language Modeling +3

Attacking Pre-trained Recommendation

1 code implementation 6 May 2023 Yiqing Wu, Ruobing Xie, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Jie zhou, Yongjun Xu, Qing He

Recently, a series of pioneer studies have shown the potency of pre-trained models in sequential recommendation, illuminating the path of building an omniscient unified pre-trained recommendation model for different downstream recommendation tasks.

Sequential Recommendation

Triple Sequence Learning for Cross-domain Recommendation

no code implementations 11 Apr 2023 Haokai Ma, Ruobing Xie, Lei Meng, Xin Chen, Xu Zhang, Leyu Lin, Jie zhou

To address this issue, we present a novel framework, termed triple sequence learning for cross-domain recommendation (Tri-CDR), which jointly models the source, target, and mixed behavior sequences to highlight the global and target preference and precisely model the triple correlation in CDR.

Contrastive Learning

A Survey on Causal Inference for Recommendation

1 code implementation 21 Mar 2023 Huishi Luo, Fuzhen Zhuang, Ruobing Xie, HengShu Zhu, Deqing Wang, Zhulin An, Yongjun Xu

Considering RS researchers' unfamiliarity with causality, it is necessary yet challenging to comprehensively review relevant studies from a coherent causal theoretical perspective, thereby facilitating a deeper integration of causal inference in RS.

Causal Inference counterfactual +3

Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation

1 code implementation 5 Feb 2023 JunJie Huang, Qi Cao, Ruobing Xie, Shaoliang Zhang, Feng Xia, HuaWei Shen, Xueqi Cheng

To reduce the influence of data sparsity, Graph Contrastive Learning (GCL) is adopted in GNN-based CF methods for enhancing performance.

Contrastive Learning Data Augmentation

Visually Grounded Commonsense Knowledge Acquisition

1 code implementation 22 Nov 2022 Yuan YAO, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius Weber, Zhiyuan Liu, Hai-Tao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun

In this work, we present CLEVER, which formulates CKE as a distantly supervised multi-instance learning problem, where models learn to summarize commonsense relations from a bag of images about an entity pair without any human annotation on image instances.

Language Modelling

Pruning Pre-trained Language Models Without Fine-Tuning

1 code implementation 12 Oct 2022 Ting Jiang, Deqing Wang, Fuzhen Zhuang, Ruobing Xie, Feng Xia

These methods, such as movement pruning, use first-order information to prune PLMs while fine-tuning the remaining weights.

Better Pre-Training by Reducing Representation Confusion

no code implementations 9 Oct 2022 Haojie Zhang, Mingfei Liang, Ruobing Xie, Zhenlong Sun, Bo Zhang, Leyu Lin

Motivated by the above investigation, we propose two novel techniques to improve pre-trained language models: Decoupled Directional Relative Position (DDRP) encoding and MTH pre-training objective.

Language Modeling Language Modelling +2

Reweighting Clicks with Dwell Time in Recommendation

no code implementations 19 Sep 2022 Ruobing Xie, Lin Ma, Shaoliang Zhang, Feng Xia, Leyu Lin

Precisely, we first define a new behavior named valid read, which helps to select high-quality click instances for different users and items via dwell time.
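
A toy version of the resulting reweighting, with an arbitrary dwell-time threshold chosen only for illustration:

def click_weight(clicked, dwell_seconds, threshold=10.0):
    """Treat a click as a 'valid read' once dwell time passes a threshold and
    weight training instances accordingly (all numbers are illustrative)."""
    if not clicked:
        return 0.0
    return 1.0 if dwell_seconds >= threshold else 0.3   # down-weight quick bounces

print(click_weight(True, 42.0), click_weight(True, 2.0))  # 1.0 0.3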

valid

Multi-granularity Item-based Contrastive Recommendation

no code implementations 4 Jul 2022 Ruobing Xie, Zhijie Qiu, Bo Zhang, Leyu Lin

Specifically, we build three item-based CL tasks as a set of plug-and-play auxiliary objectives to capture item correlations in feature, semantic and session levels.

Contrastive Learning Recommendation Systems +1

Customized Conversational Recommender Systems

no code implementations 30 Jun 2022 Shuokai Li, Yongchun Zhu, Ruobing Xie, Zhenwei Tang, Zhao Zhang, Fuzhen Zhuang, Qing He, Hui Xiong

In this paper, we propose two key points for CRS to improve the user experience: (1) Speaking like a human: humans can speak in different styles according to the current dialogue context.

Meta-Learning Recommendation Systems

Prompt Tuning for Discriminative Pre-trained Language Models

1 code implementation Findings (ACL) 2022 Yuan YAO, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang

Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.

Language Modeling Language Modelling +3

Personalized Prompt for Sequential Recommendation

no code implementations 19 May 2022 Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xu Zhang, Leyu Lin, Qing He

Specifically, we build the personalized soft prefix prompt via a prompt generator based on user profiles and enable a sufficient training of prompts via a prompt-oriented contrastive learning with both prompt- and behavior-based augmentations.
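
A hedged sketch of such a prompt generator, mapping a user-profile embedding to a soft prefix (the single linear layer and all dimensions are assumptions):

import numpy as np

class PromptGenerator:
    """Maps a user-profile embedding to prefix_len soft prompt vectors that are
    prepended to the sequence model's input (one linear layer, illustrative)."""
    def __init__(self, profile_dim, model_dim, prefix_len, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(profile_dim, prefix_len * model_dim))
        self.prefix_len, self.model_dim = prefix_len, model_dim

    def __call__(self, profile_emb):
        return (profile_emb @ self.W).reshape(self.prefix_len, self.model_dim)

gen = PromptGenerator(profile_dim=16, model_dim=32, prefix_len=4)
soft_prefix = gen(np.random.randn(16))   # (4, 32), prepended before item embeddings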

Contrastive Learning Sequential Recommendation

Selective Fairness in Recommendation via Prompts

1 code implementation 10 May 2022 Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xiang Ao, Xu Zhang, Leyu Lin, Qing He

In this work, we define the selective fairness task, where users can flexibly choose the sensitive attributes on which the recommendation model should be bias-free.

Attribute Fairness +1

User-Centric Conversational Recommendation with Multi-Aspect User Modeling

1 code implementation 20 Apr 2022 Shuokai Li, Ruobing Xie, Yongchun Zhu, Xiang Ao, Fuzhen Zhuang, Qing He

In this work, we highlight that the user's historical dialogue sessions and look-alike users are essential sources of user preferences besides the current dialogue session in CRS.

Ranked #3 on Recommendation Systems on ReDial (Recall@50 metric)

Conversational Recommendation Dialogue Generation +2

Multi-view Multi-behavior Contrastive Learning in Recommendation

1 code implementation 20 Mar 2022 Yiqing Wu, Ruobing Xie, Yongchun Zhu, Xiang Ao, Xin Chen, Xu Zhang, Fuzhen Zhuang, Leyu Lin, Qing He

We argue that MBR models should: (1) model the coarse-grained commonalities between different behaviors of a user, (2) consider both individual sequence view and global graph view in multi-behavior modeling, and (3) capture the fine-grained differences between multiple behaviors of a user.

Contrastive Learning

Contrastive Cross-domain Recommendation in Matching

1 code implementation 2 Dec 2021 Ruobing Xie, Qi Liu, Liangdong Wang, Shukai Liu, Bo Zhang, Leyu Lin

Cross-domain recommendation (CDR) aims to provide better recommendation results in the target domain with the help of the source domain, which is widely used and explored in real-world systems.

Contrastive Learning Representation Learning +1

Curriculum Disentangled Recommendation with Noisy Multi-feedback

1 code implementation NeurIPS 2021 Hong Chen, Yudong Chen, Xin Wang, Ruobing Xie, Rui Wang, Feng Xia, Wenwu Zhu

However, learning such disentangled representations from multi-feedback data is challenging because i) multi-feedback is complex: there exist complex relations among different types of feedback (e.g., click, unclick, and dislike) as well as various user intentions, and ii) multi-feedback is noisy: there exists noisy (useless) information both in features and labels, which may deteriorate the recommendation performance.

Denoising Representation Learning

MIC: Model-agnostic Integrated Cross-channel Recommenders

no code implementations 22 Oct 2021 Yujie Lu, Ping Nie, Shengyu Zhang, Ming Zhao, Ruobing Xie, William Yang Wang, Yi Ren

However, existing work is primarily built upon pre-defined retrieval channels, including User-CF (U2U), Item-CF (I2I), and Embedding-based Retrieval (U2I), and thus only accesses the limited correlations between users and items that arise from partial information of latent interactions.

model Recommendation Systems +3

Personalized Transfer of User Preferences for Cross-domain Recommendation

1 code implementation 21 Oct 2021 Yongchun Zhu, Zhenwei Tang, Yudan Liu, Fuzhen Zhuang, Ruobing Xie, Xu Zhang, Leyu Lin, Qing He

Specifically, a meta network fed with users' characteristic embeddings is learned to generate personalized bridge functions to achieve personalized transfer of preferences for each user.
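
The personalized-bridge idea can be sketched as a meta network whose output is the weight matrix of a per-user linear mapping from source-domain to target-domain space (all sizes are illustrative):

import numpy as np

d_char, d_src, d_tgt = 8, 16, 16
W_meta = np.random.randn(d_char, d_src * d_tgt)   # shared meta-network weights

def personalized_bridge(char_emb):
    """The meta network emits, per user, the weights of a linear bridge mapping
    that user's source-domain embedding into the target domain."""
    return (char_emb @ W_meta).reshape(d_src, d_tgt)

bridge = personalized_bridge(np.random.randn(d_char))
u_target = np.random.randn(d_src) @ bridge        # transferred user preference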

Recommendation Systems

Open Hierarchical Relation Extraction

1 code implementation NAACL 2021 Kai Zhang, Yuan YAO, Ruobing Xie, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

To establish the bidirectional connections between OpenRE and relation hierarchy, we propose the task of open hierarchical relation extraction and present a novel OHRE framework for the task.

Clustering Relation +2

Learning to Expand Audience via Meta Hybrid Experts and Critics for Recommendation and Advertising

3 code implementations 31 May 2021 Yongchun Zhu, Yudan Liu, Ruobing Xie, Fuzhen Zhuang, Xiaobo Hao, Kaikai Ge, Xu Zhang, Leyu Lin, Juan Cao

Besides, MetaHeac has been successfully deployed in WeChat for the promotion of both contents and advertisements, leading to great improvement in the quality of marketing.

Marketing Meta-Learning +1

Transfer-Meta Framework for Cross-domain Recommendation to Cold-Start Users

no code implementations 11 May 2021 Yongchun Zhu, Kaikai Ge, Fuzhen Zhuang, Ruobing Xie, Dongbo Xi, Xu Zhang, Leyu Lin, Qing He

With the advantage of meta learning which has good generalization ability to novel tasks, we propose a transfer-meta framework for CDR (TMCDR) which has a transfer stage and a meta stage.

Meta-Learning Recommendation Systems

Long Short-Term Temporal Meta-learning in Online Recommendation

no code implementations 8 May 2021 Ruobing Xie, Yalong Wang, Rui Wang, Yuanfu Lu, Yuanhang Zou, Feng Xia, Leyu Lin

An effective online recommendation system should jointly capture users' long-term and short-term preferences in both users' internal behaviors (from the target recommendation task) and external behaviors (from other tasks).

Meta-Learning

Understanding WeChat User Preferences and "Wow" Diffusion

1 code implementation 4 Mar 2021 Fanjin Zhang, Jie Tang, Xueyi Liu, Zhenyu Hou, Yuxiao Dong, Jing Zhang, Xiao Liu, Ruobing Xie, Kai Zhuang, Xu Zhang, Leyu Lin, Philip S. Yu

"Top Stories" is a novel friend-enhanced recommendation engine in WeChat, in which users can read articles based on preferences of both their own and their friends.

Graph Representation Learning Social and Information Networks

UPRec: User-Aware Pre-training for Recommender Systems

no code implementations 22 Feb 2021 Chaojun Xiao, Ruobing Xie, Yuan YAO, Zhiyuan Liu, Maosong Sun, Xu Zhang, Leyu Lin

Existing sequential recommendation methods rely on large amounts of training data and usually suffer from the data sparsity problem.

Self-Supervised Learning Sequential Recommendation

Improving Accuracy and Diversity in Matching of Recommendation with Diversified Preference Network

no code implementations 7 Feb 2021 Ruobing Xie, Qi Liu, Shukai Liu, Ziwei Zhang, Peng Cui, Bo Zhang, Leyu Lin

In this paper, we propose a novel Heterogeneous graph neural network framework for diversified recommendation (GraphDR) in matching to improve both recommendation accuracy and diversity.

Diversity Graph Attention +1

Denoising Relation Extraction from Document-level Distant Supervision

1 code implementation EMNLP 2020 Chaojun Xiao, Yuan YAO, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, Leyu Lin

Distant supervision (DS) has been widely used to generate auto-labeled data for sentence-level relation extraction (RE), which improves RE performance.

Denoising Document-level Relation Extraction +2

Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect

no code implementations 19 Sep 2020 Zheni Zeng, Chaojun Xiao, Yuan YAO, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

Recommender systems aim to provide item recommendations for users, and are usually faced with the data sparsity problem (e.g., cold start) in real-world scenarios.

Recommendation Systems Transfer Learning

Connecting Embeddings for Knowledge Graph Entity Typing

1 code implementation ACL 2020 Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu, Xiaojie Wang

In this paper, we propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs.

Entity Typing Knowledge Graph Completion +1

FAQ-based Question Answering via Knowledge Anchors

no code implementations 14 Nov 2019 Ruobing Xie, Yanan Lu, Fen Lin, Leyu Lin

In this paper, we propose a novel Knowledge Anchor based Question Answering (KAQA) framework for FAQ-based QA to better understand questions and retrieve more appropriate answers.

graph construction Knowledge Graphs +2

Neural Snowball for Few-Shot Relation Learning

1 code implementation 29 Aug 2019 Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

To address new relations with few-shot instances, we propose a novel bootstrapping approach, Neural Snowball, to learn new relations by transferring semantic knowledge about existing relations.

Knowledge Graphs Relation +1

Knowledge Representation Learning: A Quantitative Review

2 code implementations 28 Dec 2018 Yankai Lin, Xu Han, Ruobing Xie, Zhiyuan Liu, Maosong Sun

Knowledge representation learning (KRL) aims to represent the entities and relations of a knowledge graph in a low-dimensional semantic space, and such representations have been widely used in massive knowledge-driven tasks.

General Classification Information Retrieval +8

Cross-lingual Lexical Sememe Prediction

1 code implementation EMNLP 2018 Fanchao Qi, Yankai Lin, Maosong Sun, Hao Zhu, Ruobing Xie, Zhiyuan Liu

We propose a novel framework to model correlations between sememes and multi-lingual words in low-dimensional semantic space for sememe prediction.

Learning Word Embeddings Multilingual Word Embeddings +1

Incorporating Chinese Characters of Words for Lexical Sememe Prediction

1 code implementation ACL 2018 Huiming Jin, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, Leyu Lin

However, existing methods of lexical sememe prediction typically rely on the external context of words to represent the meaning, which usually fails to deal with low-frequency and out-of-vocabulary words.

Common Sense Reasoning Prediction

Improved Word Representation Learning with Sememes

1 code implementation ACL 2017 Yilin Niu, Ruobing Xie, Zhiyuan Liu, Maosong Sun

The key idea is to utilize word sememes to capture exact meanings of a word within specific contexts accurately.

Common Sense Reasoning Language Modeling +7

Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning with Confidence

1 code implementation 9 May 2017 Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin

Experimental results demonstrate that our confidence-aware models achieve significant and consistent improvements on all tasks, which confirms the capability of CKRL modeling confidence with structural information in both KG noise detection and knowledge representation learning.
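
The confidence-aware idea can be illustrated by weighting a standard TransE margin loss with a per-triple confidence score (a heavy simplification of CKRL's triple confidence, shown only as a sketch):

import numpy as np

def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)               # higher means more plausible

def confidence_weighted_loss(pos, neg, confidence, margin=1.0):
    """Margin loss scaled by a triple's confidence so that suspected noisy
    triples contribute less to training (illustrative simplification)."""
    raw = max(0.0, margin - transe_score(*pos) + transe_score(*neg))
    return confidence * raw

h, r, t, t_bad = (np.random.randn(4) for _ in range(4))
loss = confidence_weighted_loss((h, r, t), (h, r, t_bad), confidence=0.8)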

Representation Learning Triple Classification

Neural Emoji Recommendation in Dialogue Systems

no code implementations 14 Dec 2016 Ruobing Xie, Zhiyuan Liu, Rui Yan, Maosong Sun

It indicates that our method could well capture the contextual information and emotion flow in dialogues, which is significant for emoji recommendation.

General Classification

Image-embodied Knowledge Representation Learning

1 code implementation 22 Sep 2016 Ruobing Xie, Zhiyuan Liu, Huanbo Luan, Maosong Sun

More specifically, we first construct representations for all images of an entity with a neural image encoder.

General Classification Representation Learning +1

Knowledge Representation via Joint Learning of Sequential Text and Knowledge Graphs

no code implementations 22 Sep 2016 Jiawei Wu, Ruobing Xie, Zhiyuan Liu, Maosong Sun

There are two main challenges for constructing knowledge representations from plain texts: (1) How to take full advantage of the sequential contexts of entities in plain texts for KRL.

Informativeness Knowledge Graphs +4
