Search Results for author: Shuaiqiang Wang

Found 32 papers, 10 papers with code

KnowTuning: Knowledge-aware Fine-tuning for Large Language Models

2 code implementations17 Feb 2024 Yougang Lyu, Lingyong Yan, Shuaiqiang Wang, Haibo Shi, Dawei Yin, Pengjie Ren, Zhumin Chen, Maarten de Rijke, Zhaochun Ren

To address these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to explicitly and implicitly improve the knowledge awareness of LLMs.

Question Answering

Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers

1 code implementation2 Nov 2023 Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren

Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods.

Prompt Engineering

Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs

2 code implementations7 Jul 2023 Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang

The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding.

General Knowledge Node Classification

A Large Scale Search Dataset for Unbiased Learning to Rank

1 code implementation7 Jul 2022 Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye, Shuaiqiang Wang, Dawei Yin

The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debias algorithms.

Causal Discovery Language Modelling +3

I^3 Retriever: Incorporating Implicit Interaction in Pre-trained Language Models for Passage Retrieval

1 code implementation4 Jun 2023 Qian Dong, Yiding Liu, Qingyao Ai, Haitao Li, Shuaiqiang Wang, Yiqun Liu, Dawei Yin, Shaoping Ma

Moreover, the proposed implicit interaction is compatible with special pre-training and knowledge distillation for passage retrieval, which brings a new state-of-the-art performance.

Knowledge Distillation Passage Retrieval +2

Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation

1 code implementation28 May 2021 Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, Yi Chang

Based on it, a more robust doubly robust (MRDR) estimator has been proposed to further reduce its variance while retaining its double robustness.

counterfactual Imputation +2

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)

1 code implementation23 Feb 2024 Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang

In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems on leaking the private retrieval database.

Language Modelling Retrieval

Geometry Contrastive Learning on Heterogeneous Graphs

1 code implementation25 Jun 2022 Shichao Zhu, Chuan Zhou, Anfeng Cheng, Shirui Pan, Shuaiqiang Wang, Dawei Yin, Bin Wang

Self-supervised learning (especially contrastive learning) methods on heterogeneous graphs can effectively get rid of the dependence on supervisory data.

Contrastive Learning Node Classification +3

Etymo: A New Discovery Engine for AI Research

no code implementations25 Jan 2018 Weijian Zhang, Jonathan Deakin, Nicholas J. Higham, Shuaiqiang Wang

We present Etymo (https://etymo. io), a discovery engine to facilitate artificial intelligence (AI) research and development.

Navigate

Pre-trained Language Model based Ranking in Baidu Search

no code implementations24 May 2021 Lixin Zou, Shengqiang Zhang, Hengyi Cai, Dehong Ma, Suqi Cheng, Daiting Shi, Zhifan Zhu, Weiyue Su, Shuaiqiang Wang, Zhicong Cheng, Dawei Yin

However, it is nontrivial to directly apply these PLM-based rankers to the large-scale web search system due to the following challenging issues:(1) the prohibitively expensive computations of massive neural PLMs, especially for long texts in the web-document, prohibit their deployments in an online ranking system that demands extremely low latency;(2) the discrepancy between existing ranking-agnostic pre-training objectives and the ad-hoc retrieval scenarios that demand comprehensive relevance modeling is another main barrier for improving the online ranking system;(3) a real-world search engine typically involves a committee of ranking components, and thus the compatibility of the individually fine-tuned ranking model is critical for a cooperative ranking system.

Language Modelling Retrieval

Graph Enhanced BERT for Query Understanding

no code implementations3 Apr 2022 Juanhui Li, Yao Ma, Wei Zeng, Suqi Cheng, Jiliang Tang, Shuaiqiang Wang, Dawei Yin

In other words, GE-BERT can capture both the semantic information and the users' search behavioral information of queries.

Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking

no code implementations25 Apr 2022 Qian Dong, Yiding Liu, Suqi Cheng, Shuaiqiang Wang, Zhicong Cheng, Shuzi Niu, Dawei Yin

To leverage a reliable knowledge, we propose a novel knowledge graph distillation method and obtain a knowledge meta graph as the bridge between query and passage.

Natural Language Understanding Passage Re-Ranking +2

ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval

no code implementations18 May 2022 Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang

Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i. e., ColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.

Knowledge Distillation Open-Domain Question Answering +2

A Simple yet Effective Framework for Active Learning to Rank

no code implementations20 May 2022 Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin

To handle the diverse query requests from users at web-scale, Baidu has done tremendous efforts in understanding users' queries, retrieve relevant contents from a pool of trillions of webpages, and rank the most relevant webpages on the top of results.

Active Learning Learning-To-Rank

Approximated Doubly Robust Search Relevance Estimation

no code implementations16 Aug 2022 Lixin Zou, Changying Hao, Hengyi Cai, Suqi Cheng, Shuaiqiang Wang, Wenwen Ye, Zhicong Cheng, Simiu Gu, Dawei Yin

We further instantiate the proposed unbiased relevance estimation framework in Baidu search, with comprehensive practical solutions designed regarding the data pipeline for click behavior tracking and online relevance estimation with an approximated deep neural network.

counterfactual

Layout-aware Webpage Quality Assessment

no code implementations28 Jan 2023 Anfeng Cheng, Yiding Liu, Weibin Li, Qian Dong, Shuaiqiang Wang, Zhengjie Huang, Shikun Feng, Zhicong Cheng, Dawei Yin

To assess webpage quality from complex DOM tree data, we propose a graph neural network (GNN) based method that extracts rich layout-aware information that implies webpage quality in an end-to-end manner.

Boosting Event Extraction with Denoised Structure-to-Text Augmentation

no code implementations16 May 2023 Bo wang, Heyan Huang, Xiaochi Wei, Ge Shi, Xiao Liu, Chong Feng, Tong Zhou, Shuaiqiang Wang, Dawei Yin

Event extraction aims to recognize pre-defined event triggers and arguments from texts, which suffer from the lack of high-quality annotations.

Event Extraction Text Augmentation +1

Semantic-Enhanced Differentiable Search Index Inspired by Learning Strategies

no code implementations24 May 2023 Yubao Tang, Ruqing Zhang, Jiafeng Guo, Jiangui Chen, Zuowei Zhu, Shuaiqiang Wang, Dawei Yin, Xueqi Cheng

Specifically, we assign each document an Elaborative Description based on the query generation technique, which is more meaningful than a string of integers in the original DSI; and (2) For the associations between a document and its identifier, we take inspiration from Rehearsal Strategies in human learning.

Memorization Retrieval

Pretrained Language Model based Web Search Ranking: From Relevance to Satisfaction

no code implementations2 Jun 2023 Canjia Li, Xiaoyang Wang, Dongdong Li, Yiding Liu, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Simiu Gu, Dawei Yin

In this work, we focus on ranking user satisfaction rather than relevance in web search, and propose a PLM-based framework, namely SAT-Ranker, which comprehensively models different dimensions of user satisfaction in a unified manner.

Language Modelling

Explainability for Large Language Models: A Survey

no code implementations2 Sep 2023 Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du

For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge.

Unsupervised Large Language Model Alignment for Information Retrieval via Contrastive Feedback

no code implementations29 Sep 2023 Qian Dong, Yiding Liu, Qingyao Ai, Zhijing Wu, Haitao Li, Yiqun Liu, Shuaiqiang Wang, Dawei Yin, Shaoping Ma

Large language models (LLMs) have demonstrated remarkable capabilities across various research domains, including the field of Information Retrieval (IR).

Data Augmentation Information Retrieval +4

Exploring Memorization in Fine-tuned Language Models

no code implementations10 Oct 2023 Shenglai Zeng, Yaxin Li, Jie Ren, Yiding Liu, Han Xu, Pengfei He, Yue Xing, Shuaiqiang Wang, Jiliang Tang, Dawei Yin

In this work, we conduct the first comprehensive analysis to explore language models' (LMs) memorization during fine-tuning across tasks.

Memorization

Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method

no code implementations27 Oct 2023 Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin

In this paper, we propose a novel self-detection method to detect which questions that a LLM does not know that are prone to generate nonfactual results.

Exploiting Latent Attribute Interaction with Transformer on Heterogeneous Information Networks

no code implementations6 Nov 2023 Zeyuan Zhao, Qingqing Ge, Anfeng Cheng, Yiding Liu, Xiang Li, Shuaiqiang Wang

In addition, most of them only consider the interactions between nodes while neglecting the high-order information behind the latent interactions among different node features.

Attribute

Self-supervised Heterogeneous Graph Variational Autoencoders

no code implementations14 Nov 2023 Yige Zhao, Jianxiang Yu, Yao Cheng, Chengcheng Yu, Yiding Liu, Xiang Li, Shuaiqiang Wang

Instead of directly reconstructing raw features for attributed nodes, SHAVA generates the initial low-dimensional representation matrix for all the nodes, based on which raw features of attributed nodes are further reconstructed to leverage accurate attributes.

Attribute Graph Mining

Towards Verifiable Text Generation with Evolving Memory and Self-Reflection

no code implementations14 Dec 2023 Hao Sun, Hengyi Cai, Bo wang, Yingyan Hou, Xiaochi Wei, Shuaiqiang Wang, Yan Zhang, Dawei Yin

Despite the remarkable ability of large language models (LLMs) in language comprehension and generation, they often suffer from producing factually incorrect information, also known as hallucination.

Hallucination Retrieval +1

Improving the Robustness of Large Language Models via Consistency Alignment

no code implementations21 Mar 2024 Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Shuaiqiang Wang, Chong Meng, Zhicong Cheng, Zhaochun Ren, Dawei Yin

The training process is accomplished by self-rewards inferred from the trained model at the first stage without referring to external human preference resources.

Instruction Following Response Generation

Cannot find the paper you are looking for? You can Submit a new open access paper.