Search Results for author: Dawei Yin

Found 121 papers, 48 papers with code

Original Content Is All You Need! an Empirical Study on Leveraging Answer Summary for WikiHowQA Answer Selection Task

no code implementations COLING 2022 Liang Wen, Juan Li, Houfeng Wang, Yingwei Luo, Xiaolin Wang, Xiaodong Zhang, Zhicong Cheng, Dawei Yin

Their experiments show that leveraging the answer summaries helps the model attend to the essential information in the original lengthy answers and improves answer selection performance under certain circumstances.

Answer Selection

Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract)

no code implementations25 Sep 2024 Yuchen Li, Haoyi Xiong, Linghe Kong, Jiang Bian, Shuaiqiang Wang, Guihai Chen, Dawei Yin

Learning to rank (LTR) is widely employed in web searches to prioritize pertinent webpages from retrieved content based on input queries.

Learning-To-Rank

LLMs + Persona-Plug = Personalized LLMs

no code implementations18 Sep 2024 Jiongnan Liu, Yutao Zhu, Shuting Wang, Xiaochi Wei, Erxue Min, Yu Lu, Shuaiqiang Wang, Dawei Yin, Zhicheng Dou

By attaching this embedding to the task input, LLMs can better understand and capture user habits and preferences, thereby producing more personalized outputs without tuning their own parameters.

Language Modelling
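
A minimal sketch (assumptions, not the paper's implementation) of the "persona plug" idea in the entry above: a lightweight, user-specific embedding is aggregated from the user's history and prepended to the frozen LLM's input embeddings, so personalization happens without tuning the LLM's own parameters. The module name, attention-based aggregation, and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PersonaPlug(nn.Module):
    """Aggregates a user's history vectors into a few 'plug' embeddings for a frozen LLM."""
    def __init__(self, hist_dim: int, llm_dim: int, n_plug_tokens: int = 1):
        super().__init__()
        self.attn = nn.MultiheadAttention(hist_dim, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(n_plug_tokens, hist_dim))
        self.proj = nn.Linear(hist_dim, llm_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, n_history_items, hist_dim)
        q = self.query.unsqueeze(0).expand(history.size(0), -1, -1)
        plug, _ = self.attn(q, history, history)   # (batch, n_plug_tokens, hist_dim)
        return self.proj(plug)                     # (batch, n_plug_tokens, llm_dim)

# Usage: prepend the persona embedding to the (frozen) LLM's token embeddings.
batch, n_hist, hist_dim, llm_dim, seq_len = 2, 16, 128, 768, 32
plug = PersonaPlug(hist_dim, llm_dim)
user_history = torch.randn(batch, n_hist, hist_dim)
token_embeds = torch.randn(batch, seq_len, llm_dim)   # normally from the LLM's embedding layer
inputs_embeds = torch.cat([plug(user_history), token_embeds], dim=1)
print(inputs_embeds.shape)  # torch.Size([2, 33, 768]) -> fed to the LLM via inputs_embeds
```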

GenCRF: Generative Clustering and Reformulation Framework for Enhanced Intent-Driven Information Retrieval

no code implementations17 Sep 2024 Wonduk Seo, Haojie Zhang, Yueyang Zhang, Changhao Zhang, Songyao Duan, Lixin Su, Daiting Shi, Jiashu Zhao, Dawei Yin

Query reformulation is a well-known problem in Information Retrieval (IR), aimed at improving the success rate of a single search by automatically modifying the user's input query.

Information Retrieval Retrieval

OpenCity: Open Spatio-Temporal Foundation Models for Traffic Prediction

1 code implementation16 Aug 2024 Zhonghang Li, Long Xia, Lei Shi, Yong Xu, Dawei Yin, Chao Huang

Accurate traffic forecasting is crucial for effective urban planning and transportation management, enabling efficient resource allocation and enhanced travel experiences.

Traffic Prediction Zero-shot Generalization

DaRec: A Disentangled Alignment Framework for Large Language Model and Recommender System

no code implementations15 Aug 2024 Xihong Yang, Heming Jing, Zixing Zhang, Jindong Wang, Huakang Niu, Shuaiqiang Wang, Yu Lu, Junfeng Wang, Dawei Yin, Xinwang Liu, En Zhu, Defu Lian, Erxue Min

In this work, we prove, based on information theory, that directly aligning the representations of LLMs and collaborative models is sub-optimal for enhancing performance on downstream recommendation tasks.

Contrastive Learning Language Modelling +3

Powerful and Flexible: Personalized Text-to-Image Generation via Reinforcement Learning

1 code implementation9 Jul 2024 Fanyue Wei, Wei Zeng, Zhenyang Li, Dawei Yin, Lixin Duan, Wen Li

Personalized text-to-image models allow users to generate varied styles of images (specified with a sentence) for an object (specified with a set of reference images).

Sentence Text-to-Image Generation

InverseCoder: Unleashing the Power of Instruction-Tuned Code LLMs with Inverse-Instruct

1 code implementation8 Jul 2024 Yutong Wu, Di Huang, Wenxuan Shi, Wei Wang, Lingzhe Gao, Shihao Liu, Ziyuan Nan, Kaizhao Yuan, Rui Zhang, Xishan Zhang, Zidong Du, Qi Guo, Yewen Pu, Dawei Yin, Xing Hu, Yunji Chen

Recent advancements in open-source code large language models (LLMs) have demonstrated remarkable coding abilities by fine-tuning on the data generated from powerful closed-source LLMs such as GPT-3.5 and GPT-4 for instruction tuning.

Code Generation Code Summarization +1

When Search Engine Services meet Large Language Models: Visions and Challenges

no code implementations28 Jun 2024 Haoyi Xiong, Jiang Bian, Yuchen Li, Xuhong LI, Mengnan Du, Shuaiqiang Wang, Dawei Yin, Sumi Helal

Combining Large Language Models (LLMs) with search engine services marks a significant shift in the field of services computing, opening up new possibilities to enhance how we search for and retrieve information, understand content, and interact with internet services.

Learning-To-Rank

Hyperbolic Knowledge Transfer in Cross-Domain Recommendation System

no code implementations25 Jun 2024 Xin Yang, Heng Chang, Zhijian Lai, Jinze Yang, Xingrun Li, Yu Lu, Shuaiqiang Wang, Dawei Yin, Erxue Min

Cross-Domain Recommendation (CDR) seeks to utilize knowledge from different domains to alleviate the problem of data sparsity in the target recommendation domain, and it has been gaining more attention in recent years.

Contrastive Learning Recommendation Systems +2

Understanding the Collapse of LLMs in Model Editing

no code implementations17 Jun 2024 Wanli Yang, Fei Sun, Jiajun Tan, Xinyu Ma, Du Su, Dawei Yin, HuaWei Shen

Despite significant progress in model editing methods, their application in real-world scenarios remains challenging as they often cause large language models (LLMs) to collapse.

Model Editing

TourRank: Utilizing Large Language Models for Documents Ranking with a Tournament-Inspired Strategy

1 code implementation17 Jun 2024 Yiqun Chen, Qi Liu, Yi Zhang, Weiwei Sun, Daiting Shi, Jiaxin Mao, Dawei Yin

However, several significant challenges still persist in LLMs for ranking: (1) LLMs are constrained by limited input length, precluding them from processing a large number of documents simultaneously; (2) The output document sequence is influenced by the input order of documents, resulting in inconsistent ranking outcomes; (3) Achieving a balance between cost and ranking performance is quite challenging.
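A hedged sketch of a tournament-style ranking loop in the spirit of the entry above (not the paper's exact procedure): documents are split into groups small enough for the LLM's context window, the LLM promotes the best few from each group, and rounds repeat until a final ordering remains. The `llm_pick_top` function is a stand-in for an actual LLM listwise-ranking call.

```python
from typing import Callable, List

def llm_pick_top(query: str, docs: List[str], k: int) -> List[str]:
    # Placeholder for an LLM listwise ranking prompt; here: rank by simple term overlap.
    def overlap(d: str) -> int:
        return len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def tournament_rank(query: str, docs: List[str], group_size: int = 4,
                    promote: int = 2, pick: Callable = llm_pick_top) -> List[str]:
    pool = list(docs)
    while len(pool) > group_size:
        next_round = []
        for i in range(0, len(pool), group_size):        # each group fits the context window
            next_round.extend(pick(query, pool[i:i + group_size], promote))
        pool = next_round
    return pick(query, pool, len(pool))                  # final ordering of the survivors

docs = [f"doc about topic {i}" for i in range(10)] + ["web search ranking with LLMs"]
print(tournament_rank("LLMs for web search ranking", docs))
```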

ATM: Adversarial Tuning Multi-agent System Makes a Robust Retrieval-Augmented Generator

no code implementations28 May 2024 Junda Zhu, Lingyong Yan, Haibo Shi, Dawei Yin, Lei Sha

The ATM steers the Generator to have a robust perspective of useful documents for question answering with the help of an auxiliary Attacker agent.

Information Retrieval Language Modelling +3

Tool Learning with Large Language Models: A Survey

1 code implementation28 May 2024 Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, Ji-Rong Wen

In this survey, we focus on reviewing existing literature from two primary aspects: (1) why tool learning is beneficial and (2) how tool learning is implemented, enabling a comprehensive understanding of tool learning with LLMs.

Response Generation

Chain of Tools: Large Language Model is an Automatic Multi-tool Learner

no code implementations26 May 2024 Zhengliang Shi, Shen Gao, Xiuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei Yin, Zhumin Chen, Suzan Verberne, Zhaochun Ren

Augmenting large language models (LLMs) with external tools has emerged as a promising approach to extend their utility, empowering them to solve practical tasks.

Language Modelling Large Language Model

Towards Completeness-Oriented Tool Retrieval for Large Language Models

1 code implementation25 May 2024 Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, Ji-Rong Wen

Existing tool retrieval methods primarily focus on semantic matching between user queries and tool descriptions, frequently leading to the retrieval of redundant, similar tools.

Retrieval

A Survey of Large Language Models for Graphs

1 code implementation10 May 2024 Xubin Ren, Jiabin Tang, Dawei Yin, Nitesh Chawla, Chao Huang

This survey aims to serve as a valuable resource for researchers and practitioners eager to leverage large language models in graph learning, and to inspire continued progress in this dynamic field.

Graph Learning Link Prediction +1

A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models

no code implementations10 May 2024 Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, Qing Li

Given the powerful abilities of RAG in providing the latest and helpful auxiliary information, Retrieval-Augmented Large Language Models (RA-LLMs) have emerged to harness external and authoritative knowledge bases, rather than solely relying on the model's internal knowledge, to augment the generation quality of LLMs.

Information Retrieval RAG +1

GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation

no code implementations6 May 2024 Wenjie Zhou, Zhenxin Ding, Xiaodong Zhang, Haibo Shi, Junfeng Wang, Dawei Yin

Pre-trained language models have become an integral component of question-answering systems, achieving remarkable performance.

Knowledge Distillation Question Answering

The Real, the Better: Aligning Large Language Models with Online Human Behaviors

no code implementations1 May 2024 Guanying Jiang, Lingyong Yan, Haibo Shi, Dawei Yin

Large language model alignment is widely used and studied to prevent LLMs from producing unhelpful and harmful responses.

Language Modelling Large Language Model

Graph Machine Learning in the Era of Large Language Models (LLMs)

no code implementations23 Apr 2024 Wenqi Fan, Shijie Wang, Jiani Huang, Zhikai Chen, Yu Song, Wenzhuo Tang, Haitao Mao, Hui Liu, Xiaorui Liu, Dawei Yin, Qing Li

Meanwhile, graphs, especially knowledge graphs, are rich in reliable factual knowledge, which can be utilized to enhance the reasoning capabilities of LLMs and potentially alleviate their limitations such as hallucinations and the lack of explainability.

Few-Shot Learning Knowledge Graphs +1

XL$^2$Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies

no code implementations8 Apr 2024 Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Piji Li

However, prior benchmarks create datasets that ostensibly cater to long-text comprehension by expanding the input of traditional tasks, which falls short of exhibiting the unique characteristics of long-text understanding, including long-dependency tasks and longer text lengths compatible with modern LLMs' context window sizes.

Long-Context Understanding Reading Comprehension

MA4DIV: Multi-Agent Reinforcement Learning for Search Result Diversification

no code implementations26 Mar 2024 Yiqun Chen, Jiaxin Mao, Yi Zhang, Dehong Ma, Long Xia, Jun Fan, Daiting Shi, Zhicong Cheng, Simiu Gu, Dawei Yin

The objective of search result diversification (SRD) is to ensure that selected documents cover as many different subtopics as possible.

Diversity Multi-agent Reinforcement Learning +2

Improving the Robustness of Large Language Models via Consistency Alignment

no code implementations21 Mar 2024 Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Shuaiqiang Wang, Chong Meng, Zhicong Cheng, Zhaochun Ren, Dawei Yin

The training process is accomplished by self-rewards inferred from the trained model at the first stage without referring to external human preference resources.

Diversity Instruction Following +1

Learning to Use Tools via Cooperative and Interactive Agents

1 code implementation5 Mar 2024 Zhengliang Shi, Shen Gao, Xiuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei Yin, Pengjie Ren, Suzan Verberne, Zhaochun Ren

To mitigate these problems, we propose ConAgents, a Cooperative and interactive Agents framework, which coordinates three specialized agents for tool selection, tool execution, and action calibration separately.
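A minimal sketch of the cooperative three-agent loop named above (tool selection, tool execution, action calibration). The agent functions are toy placeholders for LLM-backed agents; the control flow is the illustrative part.

```python
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda arg: str(eval(arg, {"__builtins__": {}})),  # toy arithmetic tool
    "echo": lambda arg: arg,
}

def selection_agent(task: str) -> str:
    # An LLM would choose a tool from its description; here: a crude heuristic.
    return "calculator" if any(ch.isdigit() for ch in task) else "echo"

def execution_agent(tool: str, task: str) -> str:
    try:
        return TOOLS[tool](task)
    except Exception as err:                  # surface the error for calibration
        return f"ERROR: {err}"

def calibration_agent(tool: str, task: str, result: str) -> str:
    # Inspects the execution result and, on failure, adjusts the action.
    if result.startswith("ERROR") and tool != "echo":
        return execution_agent("echo", task)  # fall back to a safer action
    return result

def solve(task: str) -> str:
    tool = selection_agent(task)
    result = execution_agent(tool, task)
    return calibration_agent(tool, task, result)

print(solve("2 * (3 + 4)"))   # -> 14
print(solve("hello tools"))   # -> hello tools
```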

UrbanGPT: Spatio-Temporal Large Language Models

2 code implementations25 Feb 2024 Zhonghang Li, Lianghao Xia, Jiabin Tang, Yong Xu, Lei Shi, Long Xia, Dawei Yin, Chao Huang

These findings highlight the potential of building large language models for spatio-temporal learning, particularly in zero-shot scenarios where labeled data is scarce.

10-shot image generation

HiGPT: Heterogeneous Graph Language Model

1 code implementation25 Feb 2024 Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Long Xia, Dawei Yin, Chao Huang

However, existing frameworks for heterogeneous graph learning have limitations in generalizing across diverse heterogeneous graph datasets.

Graph Learning Language Modelling +1

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)

1 code implementation23 Feb 2024 Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang

In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems to leaking the private retrieval database.

Language Modelling RAG +1

KnowTuning: Knowledge-aware Fine-tuning for Large Language Models

2 code implementations17 Feb 2024 Yougang Lyu, Lingyong Yan, Shuaiqiang Wang, Haibo Shi, Dawei Yin, Pengjie Ren, Zhumin Chen, Maarten de Rijke, Zhaochun Ren

To address these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs.

Question Answering

The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse

1 code implementation15 Feb 2024 Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng

In this work, we reveal a critical phenomenon: even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.

Benchmarking Model Editing

Text-Video Retrieval via Variational Multi-Modal Hypergraph Networks

no code implementations6 Jan 2024 Qian Li, Lixin Su, Jiashu Zhao, Long Xia, Hengyi Cai, Suqi Cheng, Hengzhu Tang, Junfeng Wang, Dawei Yin

Compared to conventional textual retrieval, the main obstacle for text-video retrieval is the semantic gap between the textual nature of queries and the visual richness of video content.

Retrieval Variational Inference +1

Agent4Ranking: Semantic Robust Ranking via Personalized Query Rewriting Using Multi-agent LLM

no code implementations24 Dec 2023 Xiaopeng Li, Lixin Su, Pengyue Jia, Xiangyu Zhao, Suqi Cheng, Junfeng Wang, Dawei Yin

Specifically, we use Chain-of-Thought (CoT) prompting to employ Large Language Models (LLMs) as agents that emulate various demographic profiles and then perform efficient query rewriting, and we introduce a robust Multi-gate Mixture-of-Experts (MMoE) architecture coupled with a hybrid loss function, collectively strengthening the ranking models' robustness.
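
The entry above mentions a Multi-gate Mixture-of-Experts (MMoE) ranking backbone. Below is a generic MMoE sketch (shared experts, one softmax gate per task), not the paper's configuration; layer sizes and the number of experts/tasks are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MMoE(nn.Module):
    def __init__(self, in_dim: int, expert_dim: int, n_experts: int, n_tasks: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU()) for _ in range(n_experts)]
        )
        self.gates = nn.ModuleList([nn.Linear(in_dim, n_experts) for _ in range(n_tasks)])
        self.towers = nn.ModuleList([nn.Linear(expert_dim, 1) for _ in range(n_tasks)])

    def forward(self, x: torch.Tensor) -> list:
        # x: (batch, in_dim); returns one score tensor per task, each of shape (batch, 1)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, D)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)            # (B, E, 1)
            mixed = (w * expert_out).sum(dim=1)                         # (B, D)
            outputs.append(tower(mixed))
        return outputs

scores = MMoE(in_dim=64, expert_dim=32, n_experts=4, n_tasks=2)(torch.randn(8, 64))
print([s.shape for s in scores])  # [torch.Size([8, 1]), torch.Size([8, 1])]
```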

Towards Verifiable Text Generation with Evolving Memory and Self-Reflection

no code implementations14 Dec 2023 Hao Sun, Hengyi Cai, Bo wang, Yingyan Hou, Xiaochi Wei, Shuaiqiang Wang, Yan Zhang, Dawei Yin

Despite the remarkable ability of large language models (LLMs) in language comprehension and generation, they often suffer from producing factually incorrect information, also known as hallucination.

Hallucination Retrieval +1

Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers

1 code implementation2 Nov 2023 Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren

Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods.

Prompt Engineering

LLMRec: Large Language Models with Graph Augmentation for Recommendation

1 code implementation1 Nov 2023 Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, Chao Huang

By employing these strategies, we address the challenges posed by sparse implicit feedback and low-quality side information in recommenders.

Model Optimization Recommendation Systems

Embedding in Recommender Systems: A Survey

1 code implementation28 Oct 2023 Xiangyu Zhao, Maolin Wang, Xinjian Zhao, Jiansheng Li, Shucheng Zhou, Dawei Yin, Qing Li, Jiliang Tang, Ruocheng Guo

This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques.

AutoML Collaborative Filtering +3

Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method

no code implementations27 Oct 2023 Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin

In this paper, we propose a novel self-detection method to detect which questions an LLM does not know and is therefore prone to answering with nonfactual results.

PSP: Pre-Training and Structure Prompt Tuning for Graph Neural Networks

1 code implementation26 Oct 2023 Qingqing Ge, Zeyuan Zhao, Yiding Liu, Anfeng Cheng, Xiang Li, Shuaiqiang Wang, Dawei Yin

In particular, PSP 1) employs a dual-view contrastive learning to align the latent semantic spaces of node attributes and graph structure, and 2) incorporates structure information in prompted graph to construct more accurate prototype vectors and elicit more pre-trained knowledge in prompt tuning.

Contrastive Learning Graph Classification +1
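
A hedged sketch of the dual-view contrastive objective described above: node embeddings computed from the attribute view and the structure view are aligned with an InfoNCE-style loss that treats the two views of the same node as positives. This is a generic formulation, not the PSP implementation; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def dual_view_contrastive_loss(z_attr: torch.Tensor, z_struct: torch.Tensor,
                               temperature: float = 0.2) -> torch.Tensor:
    # z_attr, z_struct: (n_nodes, dim), one row per node in each view
    z_attr = F.normalize(z_attr, dim=-1)
    z_struct = F.normalize(z_struct, dim=-1)
    logits = z_attr @ z_struct.t() / temperature        # (n, n) cosine similarities
    targets = torch.arange(z_attr.size(0))              # view i of node i matches view i
    # Symmetric loss: attribute -> structure and structure -> attribute
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = dual_view_contrastive_loss(torch.randn(128, 64), torch.randn(128, 64))
print(loss.item())
```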

Representation Learning with Large Language Models for Recommendation

1 code implementation24 Oct 2023 Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, Chao Huang

RLMRec incorporates auxiliary textual signals, develops a user/item profiling paradigm empowered by LLMs, and aligns the semantic space of LLMs with the representation space of collaborative relational signals through a cross-view alignment framework.

Recommendation Systems Representation Learning
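
A minimal sketch (assumptions, not RLMRec's code) of cross-view alignment in the spirit of the entry above: the collaborative-filtering embedding of each user/item is projected into the LLM's semantic space and pulled toward the embedding of its LLM-generated text profile via a cosine alignment loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewAligner(nn.Module):
    def __init__(self, cf_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(cf_dim, llm_dim), nn.GELU(),
                                  nn.Linear(llm_dim, llm_dim))

    def forward(self, cf_emb: torch.Tensor, profile_emb: torch.Tensor) -> torch.Tensor:
        # cf_emb: (batch, cf_dim) from the recommender; profile_emb: (batch, llm_dim)
        # obtained by encoding the LLM-written user/item profile with a text encoder.
        aligned = F.normalize(self.proj(cf_emb), dim=-1)
        target = F.normalize(profile_emb, dim=-1)
        return (1.0 - (aligned * target).sum(dim=-1)).mean()   # cosine alignment loss

aligner = CrossViewAligner(cf_dim=64, llm_dim=384)
loss = aligner(torch.randn(32, 64), torch.randn(32, 384))
loss.backward()  # in training this term would be added to the usual recommendation loss
print(loss.item())
```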

GraphGPT: Graph Instruction Tuning for Large Language Models

1 code implementation19 Oct 2023 Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, Chao Huang

The open-sourced model implementation of our GraphGPT is available at https://github.com/HKUDS/GraphGPT.

Data Augmentation Graph Learning +2

Exploring Memorization in Fine-tuned Language Models

no code implementations10 Oct 2023 Shenglai Zeng, Yaxin Li, Jie Ren, Yiding Liu, Han Xu, Pengfei He, Yue Xing, Shuaiqiang Wang, Jiliang Tang, Dawei Yin

In this work, we conduct the first comprehensive analysis to explore language models' (LMs) memorization during fine-tuning across tasks.

Memorization

Unsupervised Large Language Model Alignment for Information Retrieval via Contrastive Feedback

no code implementations29 Sep 2023 Qian Dong, Yiding Liu, Qingyao Ai, Zhijing Wu, Haitao Li, Yiqun Liu, Shuaiqiang Wang, Dawei Yin, Shaoping Ma

Large language models (LLMs) have demonstrated remarkable capabilities across various research domains, including the field of Information Retrieval (IR).

Data Augmentation Information Retrieval +4

Explainability for Large Language Models: A Survey

no code implementations2 Sep 2023 Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du

For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge.

Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs

2 code implementations7 Jul 2023 Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang

The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding.

General Knowledge Node Classification

I^3 Retriever: Incorporating Implicit Interaction in Pre-trained Language Models for Passage Retrieval

1 code implementation4 Jun 2023 Qian Dong, Yiding Liu, Qingyao Ai, Haitao Li, Shuaiqiang Wang, Yiqun Liu, Dawei Yin, Shaoping Ma

Moreover, the proposed implicit interaction is compatible with special pre-training and knowledge distillation for passage retrieval, which brings a new state-of-the-art performance.

Knowledge Distillation Passage Retrieval +2

Pretrained Language Model based Web Search Ranking: From Relevance to Satisfaction

no code implementations2 Jun 2023 Canjia Li, Xiaoyang Wang, Dongdong Li, Yiding Liu, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Simiu Gu, Dawei Yin

In this work, we focus on ranking user satisfaction rather than relevance in web search, and propose a PLM-based framework, namely SAT-Ranker, which comprehensively models different dimensions of user satisfaction in a unified manner.

Language Modelling

Semantic-Enhanced Differentiable Search Index Inspired by Learning Strategies

no code implementations24 May 2023 Yubao Tang, Ruqing Zhang, Jiafeng Guo, Jiangui Chen, Zuowei Zhu, Shuaiqiang Wang, Dawei Yin, Xueqi Cheng

Specifically, (1) we assign each document an Elaborative Description based on the query generation technique, which is more meaningful than a string of integers in the original DSI; and (2) for the associations between a document and its identifier, we take inspiration from Rehearsal Strategies in human learning.

Memorization Retrieval

Unconfounded Propensity Estimation for Unbiased Ranking

no code implementations17 May 2023 Dan Luo, Lixin Zou, Qingyao Ai, Zhiyu Chen, Chenliang Li, Dawei Yin, Brian D. Davison

The goal of unbiased learning to rank (ULTR) is to leverage implicit user feedback for optimizing learning-to-rank systems.

Learning-To-Rank

Boosting Event Extraction with Denoised Structure-to-Text Augmentation

no code implementations16 May 2023 Bo wang, Heyan Huang, Xiaochi Wei, Ge Shi, Xiao Liu, Chong Feng, Tong Zhou, Shuaiqiang Wang, Dawei Yin

Event extraction aims to recognize pre-defined event triggers and arguments from texts, a task that suffers from a lack of high-quality annotations.

Event Extraction Text Augmentation +1

Disentangled Contrastive Collaborative Filtering

1 code implementation4 May 2023 Xubin Ren, Lianghao Xia, Jiashu Zhao, Dawei Yin, Chao Huang

Recent studies show that graph neural networks (GNNs) are prevalent to model high-order relationships for collaborative filtering (CF).

Collaborative Filtering Contrastive Learning +1

User Retention-oriented Recommendation with Decision Transformer

1 code implementation11 Mar 2023 Kesen Zhao, Lixin Zou, Xiangyu Zhao, Maolin Wang, Dawei Yin

However, deploying the DT in recommendation is a non-trivial problem because of the following challenges: (1) deficiency in modeling the numerical reward value; (2) data discrepancy between the policy learning and recommendation generation; (3) unreliable offline performance evaluation.

Contrastive Learning counterfactual +1

Layout-aware Webpage Quality Assessment

no code implementations28 Jan 2023 Anfeng Cheng, Yiding Liu, Weibin Li, Qian Dong, Shuaiqiang Wang, Zhengjie Huang, Shikun Feng, Zhicong Cheng, Dawei Yin

To assess webpage quality from complex DOM tree data, we propose a graph neural network (GNN) based method that extracts rich layout-aware information that implies webpage quality in an end-to-end manner.

Graph Neural Network

Feature-Level Debiased Natural Language Understanding

1 code implementation11 Dec 2022 Yougang Lyu, Piji Li, Yechang Yang, Maarten de Rijke, Pengjie Ren, Yukun Zhao, Dawei Yin, Zhaochun Ren

We also propose a dynamic negative sampling strategy to capture the dynamic influence of biases by employing a bias-only model to dynamically select the most similar biased negative samples.

Contrastive Learning Natural Language Understanding

PILE: Pairwise Iterative Logits Ensemble for Multi-Teacher Labeled Distillation

no code implementations11 Nov 2022 Lianshang Cai, Linhao Zhang, Dehong Ma, Jun Fan, Daiting Shi, Yi Wu, Zhicong Cheng, Simiu Gu, Dawei Yin

In this paper, we focus on two key questions in knowledge distillation for ranking models: 1) how to ensemble knowledge from multiple teachers; 2) how to utilize the label information of the data in the distillation process.

Knowledge Distillation

Whole Page Unbiased Learning to Rank

no code implementations19 Oct 2022 Haitao Mao, Lixin Zou, Yujia Zheng, Jiliang Tang, Xiaokai Chu, Jiashu Zhao, Qian Wang, Dawei Yin

To address the above challenges, we propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model with causal discovery and mitigate the biases induced by multiple SERP features with no specific design.

Causal Discovery Information Retrieval +2

CPS-MEBR: Click Feedback-Aware Web Page Summarization for Multi-Embedding-Based Retrieval

no code implementations18 Oct 2022 Wenbiao Li, Pan Tang, Zhengfan Wu, Weixue Lu, Minghua Zhang, Zhenlei Tian, Daiting Shi, Yu Sun, Simiu Gu, Dawei Yin

Meanwhile, we introduce sentence-level semantic interaction to design a multi-embedding-based retrieval (MEBR) model, which can generate multiple embeddings to deal with different potential queries by using frequently clicked sentences in web pages.

Retrieval Sentence
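
A hedged sketch of the multi-embedding retrieval idea described above: each web page keeps several embeddings (e.g., of its frequently clicked sentences), and a query matches a page through its best-scoring embedding. The encoder is stubbed out with random vectors purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
encode = lambda text: rng.standard_normal(DIM)   # stand-in for a real sentence encoder

# Offline: each page is represented by embeddings of a few frequently clicked sentences.
pages = {
    "page_a": ["cheap flights to tokyo", "book airline tickets online"],
    "page_b": ["python list comprehension tutorial", "python for beginners"],
}
page_embs = {pid: np.stack([encode(s) for s in sents]) for pid, sents in pages.items()}

def score(query: str, pid: str) -> float:
    q = encode(query)
    embs = page_embs[pid]
    sims = embs @ q / (np.linalg.norm(embs, axis=1) * np.linalg.norm(q) + 1e-8)
    return float(sims.max())   # a page matches if any one of its embeddings matches

ranking = sorted(pages, key=lambda pid: score("flights to tokyo", pid), reverse=True)
print(ranking)   # ordering is meaningless here because the encoder is random
```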

Approximated Doubly Robust Search Relevance Estimation

no code implementations16 Aug 2022 Lixin Zou, Changying Hao, Hengyi Cai, Suqi Cheng, Shuaiqiang Wang, Wenwen Ye, Zhicong Cheng, Simiu Gu, Dawei Yin

We further instantiate the proposed unbiased relevance estimation framework in Baidu search, with comprehensive practical solutions designed regarding the data pipeline for click behavior tracking and online relevance estimation with an approximated deep neural network.

counterfactual
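
For background on the entry above: the textbook doubly robust (DR) estimator combines an imputation model with inverse-propensity-weighted residuals, so it stays unbiased if either the imputation or the propensity model is correct. This is the generic estimator, not the approximation proposed in the paper.

```python
import numpy as np

def doubly_robust_estimate(observed: np.ndarray, clicks: np.ndarray,
                           propensity: np.ndarray, imputed: np.ndarray) -> float:
    """
    observed:   (n,) 1 if the item was examined by the user, else 0
    clicks:     (n,) observed click/relevance signal (meaningful only where observed = 1)
    propensity: (n,) estimated probability of being examined
    imputed:    (n,) relevance predicted by the imputation model
    """
    correction = observed * (clicks - imputed) / np.clip(propensity, 1e-6, None)
    return float(np.mean(imputed + correction))

rng = np.random.default_rng(1)
n = 1000
propensity = rng.uniform(0.1, 0.9, n)
observed = rng.binomial(1, propensity)
true_rel = rng.binomial(1, 0.3, n)
clicks = observed * true_rel
imputed = np.full(n, 0.25)                   # a deliberately imperfect imputation model
print(doubly_robust_estimate(observed, clicks, propensity, imputed))  # close to 0.3
```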

Model-based Unbiased Learning to Rank

1 code implementation24 Jul 2022 Dan Luo, Lixin Zou, Qingyao Ai, Zhiyu Chen, Dawei Yin, Brian D. Davison

Existing methods in unbiased learning to rank typically rely on click modeling or inverse propensity weighting (IPW).

Information Retrieval Learning-To-Rank +1

Factorized and Controllable Neural Re-Rendering of Outdoor Scene for Photo Extrapolation

no code implementations14 Jul 2022 Boming Zhao, Bangbang Yang, Zhenyang Li, Zuoyue Li, Guofeng Zhang, Jiashu Zhao, Dawei Yin, Zhaopeng Cui, Hujun Bao

Expanding an existing tourist photo from a partially captured scene to a full scene is one of the desired experiences for photography applications.

A Large Scale Search Dataset for Unbiased Learning to Rank

1 code implementation7 Jul 2022 Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye, Shuaiqiang Wang, Dawei Yin

The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debias algorithms.

Causal Discovery Language Modelling +3

Geometry Contrastive Learning on Heterogeneous Graphs

1 code implementation25 Jun 2022 Shichao Zhu, Chuan Zhou, Anfeng Cheng, Shirui Pan, Shuaiqiang Wang, Dawei Yin, Bin Wang

Self-supervised learning (especially contrastive learning) methods on heterogeneous graphs can effectively remove the dependence on supervised data.

Contrastive Learning Node Classification +3

Are Message Passing Neural Networks Really Helpful for Knowledge Graph Completion?

1 code implementation21 May 2022 Juanhui Li, Harry Shomer, Jiayuan Ding, Yiqi Wang, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin

This suggests a conflation of scoring function design, loss function design, and MP in prior work, with promising insights regarding the scalability of state-of-the-art KGC methods today, as well as careful attention to more suitable MP designs for KGC tasks tomorrow.

A Simple yet Effective Framework for Active Learning to Rank

no code implementations20 May 2022 Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin

To handle the diverse query requests from users at web-scale, Baidu has made tremendous efforts to understand users' queries, retrieve relevant content from a pool of trillions of webpages, and rank the most relevant webpages at the top of the results.

Active Learning Learning-To-Rank

ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval

no code implementations18 May 2022 Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang

Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i.e., ColBERT) to a vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.

Knowledge Distillation Open-Domain Question Answering +2
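
A hedged sketch of distilling a late-interaction (ColBERT-style MaxSim) score into a plain dual-encoder dot-product score over shared token embeddings, in the spirit of the self on-the-fly distillation described above. The random tensors stand in for encoder outputs; the pooling choice and the KL objective over in-batch scores are assumptions, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def maxsim_scores(q_tok: torch.Tensor, d_tok: torch.Tensor) -> torch.Tensor:
    # q_tok: (B, Lq, D), d_tok: (B, Ld, D) -> all-pairs late-interaction scores (B, B)
    sim = torch.einsum("qid,kjd->qkij", q_tok, d_tok)    # token-level similarities
    return sim.max(dim=-1).values.sum(dim=-1)

def dual_encoder_scores(q_tok: torch.Tensor, d_tok: torch.Tensor) -> torch.Tensor:
    # Mean-pool each side to a single vector, then all-pairs dot products (B, B).
    return q_tok.mean(dim=1) @ d_tok.mean(dim=1).t()

B, Lq, Ld, D = 8, 16, 64, 128
q_tok = torch.randn(B, Lq, D, requires_grad=True)    # would come from the shared encoder
d_tok = torch.randn(B, Ld, D, requires_grad=True)

teacher = maxsim_scores(q_tok, d_tok).detach()       # late interaction acts as the teacher
student = dual_encoder_scores(q_tok, d_tok)          # dual-encoder acts as the student
# Distill the in-batch score distribution from teacher to student.
distill_loss = F.kl_div(F.log_softmax(student, dim=-1),
                        F.softmax(teacher, dim=-1), reduction="batchmean")
distill_loss.backward()
print(distill_loss.item())
```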

Hypergraph Contrastive Collaborative Filtering

1 code implementation26 Apr 2022 Lianghao Xia, Chao Huang, Yong Xu, Jiashu Zhao, Dawei Yin, Jimmy Xiangji Huang

Additionally, our HCCF model effectively integrates the hypergraph structure encoding with self-supervised learning to reinforce the representation quality of recommender systems, based on the hypergraph-enhanced self-discrimination.

Collaborative Filtering Contrastive Learning +2

Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking

no code implementations25 Apr 2022 Qian Dong, Yiding Liu, Suqi Cheng, Shuaiqiang Wang, Zhicong Cheng, Shuzi Niu, Dawei Yin

To leverage reliable knowledge, we propose a novel knowledge graph distillation method and obtain a knowledge meta graph as the bridge between query and passage.

Graph Neural Network Natural Language Understanding +3

Graph Enhanced BERT for Query Understanding

no code implementations3 Apr 2022 Juanhui Li, Yao Ma, Wei Zeng, Suqi Cheng, Jiliang Tang, Shuaiqiang Wang, Dawei Yin

In other words, GE-BERT can capture both the semantic information and the users' search behavioral information of queries.

Sequential Recommendation with User Evolving Preference Decomposition

no code implementations31 Mar 2022 Weiqi Shao, Xu Chen, Long Xia, Jiashu Zhao, Dawei Yin

To solve this problem, in this paper, we propose a novel sequential recommender model via decomposing and modeling user independent preferences.

Sequential Recommendation

Contrastive Meta Learning with Behavior Multiplicity for Recommendation

1 code implementation17 Feb 2022 Wei Wei, Chao Huang, Lianghao Xia, Yong Xu, Jiashu Zhao, Dawei Yin

In addition, to capture the diverse multi-behavior patterns, we design a contrastive meta network to encode the customized behavior heterogeneity for different users.

Contrastive Learning Meta-Learning

Gumble Softmax For User Behavior Modeling

no code implementations6 Dec 2021 Weiqi Shao, Xu Chen, Jiashu Zhao, Long Xia, Dawei Yin

We propose a sequential model with a dynamic number of representations for recommendation systems (RDRSR).

Sequential Recommendation

User behavior understanding in real world settings

no code implementations6 Dec 2021 Weiqi Shao, Xu Chen, Jiashu Zhao, Long Xia, Dawei Yin

It is necessary to learn a dynamic group of representations according to the item groups in a user's historical behavior.

Global Context Enhanced Social Recommendation with Hierarchical Graph Neural Networks

1 code implementation8 Oct 2021 Huance Xu, Chao Huang, Yong Xu, Lianghao Xia, Hao Xing, Dawei Yin

Social recommendation aims to leverage social connections among users to enhance recommendation performance.

Graph Neural Network

On Length Divergence Bias in Textual Matching Models

no code implementations Findings (ACL) 2022 Lan Jiang, Tianshu Lyu, Yankai Lin, Meng Chong, Xiaoyong Lyu, Dawei Yin

To determine whether TM models have adopted such heuristic, we introduce an adversarial evaluation scheme which invalidates the heuristic.

Semantic Similarity Semantic Textual Similarity

Enhancing Question Generation with Commonsense Knowledge

no code implementations CCL 2021 Xin Jia, Hao Wang, Dawei Yin, Yunfang Wu

Question generation (QG) is to generate natural and grammatical questions that can be answered by a specific answer for a given context.

Multi-Task Learning Question Generation +2

Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation

1 code implementation28 May 2021 Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, Yi Chang

Based on it, a more robust doubly robust (MRDR) estimator has been proposed to further reduce its variance while retaining its double robustness.

counterfactual Imputation +2

Pre-trained Language Model based Ranking in Baidu Search

no code implementations24 May 2021 Lixin Zou, Shengqiang Zhang, Hengyi Cai, Dehong Ma, Suqi Cheng, Daiting Shi, Zhifan Zhu, Weiyue Su, Shuaiqiang Wang, Zhicong Cheng, Dawei Yin

However, it is nontrivial to directly apply these PLM-based rankers to the large-scale web search system due to the following challenging issues: (1) the prohibitively expensive computations of massive neural PLMs, especially for long texts in the web-document, prohibit their deployments in an online ranking system that demands extremely low latency; (2) the discrepancy between existing ranking-agnostic pre-training objectives and the ad-hoc retrieval scenarios that demand comprehensive relevance modeling is another main barrier for improving the online ranking system; (3) a real-world search engine typically involves a committee of ranking components, and thus the compatibility of the individually fine-tuned ranking model is critical for a cooperative ranking system.

Language Modelling Retrieval

Data-Efficient Reinforcement Learning for Malaria Control

no code implementations4 May 2021 Lixin Zou, Long Xia, Linfang Hou, Xiangyu Zhao, Dawei Yin

This work introduces a practical, data-efficient policy learning method, named Variance-Bonus Monte Carlo Tree Search (VB-MCTS), which can cope with very little data and facilitate learning from scratch in only a few trials.

Decision Making Model-based Reinforcement Learning +3

First Target and Opinion then Polarity: Enhancing Target-opinion Correlation for Aspect Sentiment Triplet Extraction

no code implementations17 Feb 2021 Lianzhe Huang, Peiyi Wang, Sujian Li, Tianyu Liu, Xiaodong Zhang, Zhicong Cheng, Dawei Yin, Houfeng Wang

Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from a sentence, including target entities, associated sentiment polarities, and opinion spans which rationalize the polarities.

Aspect Sentiment Triplet Extraction Sentence +1

User-Inspired Posterior Network for Recommendation Reason Generation

no code implementations16 Feb 2021 Haolan Zhan, Hainan Zhang, Hongshen Chen, Lei Shen, Yanyan Lan, Zhuoye Ding, Dawei Yin

A simple and effective way is to extract keywords directly from the knowledge-base of products, i.e., attributes or title, as the recommendation reason.

Question Answering

SceneRec: Scene-Based Graph Neural Networks for Recommender Systems

no code implementations12 Feb 2021 Gang Wang, Ziyi Guo, Xiang Li, Dawei Yin, Shuai Ma

Collaborative filtering has been largely used to advance modern recommender systems to predict user preference.

Collaborative Filtering Recommendation Systems +1

Modeling Topical Relevance for Multi-Turn Dialogue Generation

no code implementations27 Sep 2020 Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin

Therefore, an ideal dialogue generation model should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly.

Dialogue Generation Sentence

Neural Interactive Collaborative Filtering

1 code implementation4 Jul 2020 Lixin Zou, Long Xia, Yulong Gu, Xiangyu Zhao, Weidong Liu, Jimmy Xiangji Huang, Dawei Yin

Therefore, the proposed exploration policy, which balances learning the user profile against making accurate recommendations, can be directly optimized by maximizing users' long-term satisfaction with reinforcement learning.

Collaborative Filtering Meta-Learning +2

CAST: A Correlation-based Adaptive Spectral Clustering Algorithm on Multi-scale Data

1 code implementation8 Jun 2020 Xiang Li, Ben Kao, Caihua Shan, Dawei Yin, Martin Ester

We study the problem of applying spectral clustering to cluster multi-scale data, which is data whose clusters are of various sizes and densities.

Clustering

Robust Reinforcement Learning with Wasserstein Constraint

no code implementations1 Jun 2020 Linfang Hou, Liang Pang, Xin Hong, Yanyan Lan, Zhi-Ming Ma, Dawei Yin

Robust Reinforcement Learning aims to find the optimal policy with some extent of robustness to environmental dynamics.

reinforcement-learning Reinforcement Learning +1

Data Manipulation: Towards Effective Instance Learning for Neural Dialogue Generation via Learning to Augment and Reweight

no code implementations ACL 2020 Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, Dawei Yin

In this paper, we propose a data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the effect of inefficient samples simultaneously.

Dialogue Generation

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

2 code implementations17 Jan 2020 Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi Chang

In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which is a nonlinear feature selection method.

Descriptive feature selection

Off-policy Learning for Multiple Loggers

no code implementations23 Jul 2019 Li He, Long Xia, Wei Zeng, Zhi-Ming Ma, Yihong Zhao, Dawei Yin

To make full use of such historical data, learning policies from multiple loggers becomes necessary.

counterfactual

Deep Social Collaborative Filtering

no code implementations16 Jul 2019 Wenqi Fan, Yao Ma, Dawei Yin, Jian-Ping Wang, Jiliang Tang, Qing Li

Meanwhile, most of these models treat neighbors' information equally without considering the specific recommendations.

Collaborative Filtering Recommendation Systems

Toward Simulating Environments in Reinforcement Learning Based Recommendations

no code implementations27 Jun 2019 Xiangyu Zhao, Long Xia, Lixin Zou, Dawei Yin, Jiliang Tang

Thus, it calls for a user simulator that can mimic real users' behaviors where we can pre-train and evaluate new recommendation algorithms.

Generative Adversarial Network Recommendation Systems +3

Graph Neural Networks for Social Recommendation

8 code implementations19 Feb 2019 Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, Dawei Yin

These advantages of GNNs provide great potential to advance social recommendation since data in social recommender systems can be represented as user-user social graph and user-item graph; and learning latent factors of users and items is the key.

Ranked #3 on Recommendation Systems on Epinions (using extra training data)

Graph Neural Network

Reinforcement Learning to Optimize Long-term User Engagement in Recommender Systems

no code implementations13 Feb 2019 Lixin Zou, Long Xia, Zhuoye Ding, Jiaxing Song, Weidong Liu, Dawei Yin

Though reinforcement learning (RL) naturally fits the problem of maximizing long-term rewards, applying RL to optimize long-term user engagement still faces challenges: user behaviors are versatile and difficult to model, as they typically consist of both instant feedback (e.g., clicks, ordering) and delayed feedback (e.g., dwell time, revisit); in addition, performing effective off-policy learning is still immature, especially when combining bootstrapping and function approximation.

Recommendation Systems reinforcement-learning +2

Whole-Chain Recommendations

no code implementations11 Feb 2019 Xiangyu Zhao, Long Xia, Linxin Zou, Hui Liu, Dawei Yin, Jiliang Tang

With the recent prevalence of Reinforcement Learning (RL), there have been tremendous interests in developing RL-based recommender systems.

Multi-agent Reinforcement Learning Recommendation Systems +2

Product-Aware Answer Generation in E-Commerce Question-Answering

1 code implementation23 Jan 2019 Shen Gao, Zhaochun Ren, Yihong Eric Zhao, Dongyan Zhao, Dawei Yin, Rui Yan

In this paper, we propose the task of product-aware answer generation, which tends to generate an accurate and complete answer from large-scale unlabeled e-commerce reviews and product attributes.

Answer Generation Question Answering

Deep reinforcement learning for search, recommendation, and online advertising: a survey

no code implementations18 Dec 2018 Xiangyu Zhao, Long Xia, Jiliang Tang, Dawei Yin

Search, recommendation, and online advertising are the three most important information-providing mechanisms on the web.

reinforcement-learning Reinforcement Learning +1

Streaming Graph Neural Networks

2 code implementations24 Oct 2018 Yao Ma, Ziyi Guo, Zhaochun Ren, Eric Zhao, Jiliang Tang, Dawei Yin

Current graph neural network models cannot utilize the dynamic information in dynamic graphs.

Community Detection General Classification +4

Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation

2 code implementations31 Aug 2018 Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Zhao, Dawei Yin

However, the expensive nature of state labeling and the weak interpretability make dialogue state tracking a challenging problem for both task-oriented and non-task-oriented dialogue generation: for generating responses in task-oriented dialogues, state tracking is usually learned from manually annotated corpora, where the human annotation is expensive for training; for generating responses in non-task-oriented dialogues, most existing work neglects explicit state tracking due to the unlimited number of dialogue states.

Decoder Dialogue Generation +1

Linked Recurrent Neural Networks

no code implementations19 Aug 2018 Zhiwei Wang, Yao Ma, Dawei Yin, Jiliang Tang

Recurrent Neural Networks (RNNs) have been proven to be effective in modeling sequential data and they have been applied to boost a variety of tasks such as document classification, speech recognition and machine translation.

Document Classification Machine Translation +3

Multi-dimensional Graph Convolutional Networks

no code implementations18 Aug 2018 Yao Ma, Suhang Wang, Charu C. Aggarwal, Dawei Yin, Jiliang Tang

Convolutional neural networks (CNNs) leverage the great power in representation learning on regular grid data such as image and video.

Social and Information Networks

Knowledge Diffusion for Neural Dialogue Generation

1 code implementation ACL 2018 Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, Dawei Yin

Our empirical study on a real-world dataset proves that our model is capable of generating meaningful, diverse and natural responses for both factoid questions and knowledge-grounded chit-chats.

Dialogue Generation Question Answering +1

Deep Reinforcement Learning for Page-wise Recommendations

no code implementations7 May 2018 Xiangyu Zhao, Long Xia, Liang Zhang, Zhuoye Ding, Dawei Yin, Jiliang Tang

In particular, we propose a principled approach to jointly generate a set of complementary items and the corresponding strategy to display them in a 2-D page; and propose a novel page-wise recommendation framework based on deep reinforcement learning, DeepPage, which can optimize a page of items with proper display based on real-time feedback from users.

Recommendation Systems reinforcement-learning +2

Deep Reinforcement Learning for List-wise Recommendations

7 code implementations30 Dec 2017 Xiangyu Zhao, Liang Zhang, Long Xia, Zhuoye Ding, Dawei Yin, Jiliang Tang

Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services.

Recommendation Systems reinforcement-learning +2

Streaming Recommender Systems

no code implementations21 Jul 2016 Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang

The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios.

Recommendation Systems

Consistent Collective Matrix Completion under Joint Low Rank Structure

no code implementations5 Dec 2014 Suriya Gunasekar, Makoto Yamada, Dawei Yin, Yi Chang

We address the collective matrix completion problem of jointly recovering a collection of matrices with shared structure from partial (and potentially noisy) observations.

Matrix Completion
