Search Results for author: Xiangyang Li

Found 18 papers, 11 papers with code

Multi-view Content-aware Indexing for Long Document Retrieval

no code implementations · 23 Apr 2024 · Kuicai Dong, Derrick Goh Xin Deik, Yi Quan Lee, Hao Zhang, Xiangyang Li, Cong Zhang, Yong Liu

As existing chunking methods do not consider content structure, the resulting chunks can exclude vital information or include irrelevant content.

Lookahead Exploration with Neural Radiance Representation for Continuous Vision-Language Navigation

1 code implementation · 2 Apr 2024 · Zihan Wang, Xiangyang Li, Jiahao Yang, Yeqi Liu, Junjie Hu, Ming Jiang, Shuqiang Jiang

Vision-and-language navigation (VLN) enables the agent to navigate to a remote location following the natural language instruction in 3D environments.

Navigate · Vision and Language Navigation +1

HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback

no code implementations · 13 Mar 2024 · Ang Li, Qiugen Xiao, Peng Cao, Jian Tang, Yi Yuan, Zijie Zhao, Xiaoyuan Chen, Liang Zhang, Xiangyang Li, Kaitong Yang, Weidong Guo, Yukang Gan, Xu Yu, Daniell Wang, Ying Shan

Using ChatGPT as a labeler to provide feedback on open-domain prompts in RLAIF training, we observe an increase in human evaluators' preference win ratio for model responses, but a decrease in evaluators' satisfaction rate.

Language Modelling · Large Language Model +2

Instruction Fusion: Advancing Prompt Evolution through Hybridization

no code implementations · 25 Dec 2023 · Weidong Guo, Jiuding Yang, Kaitong Yang, Xiangyang Li, Zhuwei Rao, Yu Xu, Di Niu

The fine-tuning of Large Language Models (LLMs) specialized in code generation has seen notable advancements through the use of open-domain coding queries.

Code Generation

A Unified Framework for Multi-Domain CTR Prediction via Large Language Models

1 code implementation · 17 Dec 2023 · Zichuan Fu, Xiangyang Li, Chuhan Wu, Yichao Wang, Kuicai Dong, Xiangyu Zhao, Mengchen Zhao, Huifeng Guo, Ruiming Tang

Click-Through Rate (CTR) prediction is a crucial task in online recommendation platforms as it involves estimating the probability of user engagement with advertisements or items by clicking on them.

Click-Through Rate Prediction · Language Modelling +2
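The abstract above frames CTR prediction as estimating the probability that a user clicks on an item or advertisement. A minimal sketch of that formulation is a logistic model over engineered features; the feature names, weights, and values below are invented for illustration and are not from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_ctr(features, weights, bias):
    # Linear score over (hypothetical) normalized / one-hot features,
    # squashed into a click probability in (0, 1).
    z = bias + sum(weights[name] * value for name, value in features.items())
    return sigmoid(z)

# Toy weights and a toy impression; real CTR models learn these from logs.
weights = {"user_age_norm": 0.4, "item_category_sports": 1.2, "hour_of_day_norm": -0.3}
features = {"user_age_norm": 0.5, "item_category_sports": 1.0, "hour_of_day_norm": 0.8}
ctr = predict_ctr(features, weights, bias=-2.0)
```

Multi-domain CTR prediction, as studied in the paper, extends this basic setup so that one model serves examples drawn from several domains at once.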

FLIP: Towards Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction

no code implementations · 30 Oct 2023 · Hangyu Wang, Jianghao Lin, Xiangyang Li, Bo Chen, Chenxu Zhu, Ruiming Tang, Weinan Zhang, Yong Yu

Specifically, the masked data of one modality (i.e., tokens or features) has to be recovered with the help of the other modality, which establishes the feature-level interaction and alignment via sufficient mutual information extraction between dual modalities.

Click-Through Rate Prediction · Contrastive Learning
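The masking setup described above can be sketched in a few lines: entries of one modality are hidden and must be recovered while the other modality stays fully visible. This is an illustrative sketch, not the authors' implementation; `MASK_ID` and the toy token/feature ids are invented.

```python
import random

MASK_ID = 0  # hypothetical sentinel id for a masked entry

def mask_one_modality(tokens, features, mask_tokens=True, ratio=0.5, seed=7):
    # Mask a fraction of one modality; the other is returned untouched so a
    # model must reconstruct the hidden entries from cross-modal context.
    rng = random.Random(seed)
    src = list(tokens) if mask_tokens else list(features)
    positions = rng.sample(range(len(src)), k=max(1, int(len(src) * ratio)))
    targets = {i: src[i] for i in positions}   # ground truth to be recovered
    for i in positions:
        src[i] = MASK_ID                       # hide the entry
    if mask_tokens:
        return src, list(features), targets    # features stay fully visible
    return list(tokens), src, targets          # tokens stay fully visible

tokens = [12, 7, 45, 9]      # toy textual token ids
features = [3, 88, 21, 5]    # toy tabular feature ids
masked_tokens, visible_features, targets = mask_one_modality(tokens, features)
```

In the paper this reconstruction objective is what forces the ID-based model and the language model to exchange information at the feature level.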

GridMM: Grid Memory Map for Vision-and-Language Navigation

1 code implementation · ICCV 2023 · Zihan Wang, Xiangyang Li, Jiahao Yang, Yeqi Liu, Shuqiang Jiang

Vision-and-language navigation (VLN) enables the agent to navigate to a remote location following the natural language instruction in 3D environments.

Navigate · Vision and Language Navigation

How Can Recommender Systems Benefit from Large Language Models: A Survey

1 code implementation · 9 Jun 2023 · Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong Liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang

In this paper, we conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.

Ethics · Feature Engineering +5

CTRL: Connect Collaborative and Language Model for CTR Prediction

no code implementations · 5 Jun 2023 · Xiangyang Li, Bo Chen, Lu Hou, Ruiming Tang

Both tabular data and converted textual data are regarded as two different modalities and are separately fed into the collaborative CTR model and pre-trained language model.

Click-Through Rate Prediction · Language Modelling +1
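The dual-modality input described above — the raw tabular row for the collaborative model, and a textual rendering of the same row for the language model — can be sketched as follows. The field names and the rendering template are invented for illustration; the paper does not prescribe this exact format.

```python
def tabular_to_text(row):
    # Render a tabular sample as a natural-language string for the PLM branch.
    return ", ".join(f"{field} is {value}" for field, value in row.items())

# One toy impression in its two modalities.
row = {"user_id": "u42", "item_category": "sports", "hour": 20}
text_view = tabular_to_text(row)

# The two views would then go to two separate encoders:
#   collaborative CTR model    <- row       (id/feature embeddings)
#   pre-trained language model <- text_view (token embeddings)
```

Keeping the two branches separate, as the abstract notes, lets each encoder consume its native modality while later stages align their representations.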

Plug-and-Play Document Modules for Pre-trained Models

1 code implementation · 28 May 2023 · Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Zhao Cao, Maosong Sun

By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document one time to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders.

Question Answering
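The efficiency argument above — encode a document once, then reuse that encoding across multiple tasks instead of re-encoding per task — can be illustrated with a toy stand-in. The word-count "representation" below is invented; it merely plays the role of the expensive one-time document plugin.

```python
encode_calls = 0

def encode_document(doc):
    # Stand-in for the expensive one-time document encoding step.
    global encode_calls
    encode_calls += 1
    words = doc.lower().split()
    return {w: words.count(w) for w in set(words)}

def answer_task(doc_plugin, query):
    # Each downstream task reuses the cached plugin with its own query,
    # instead of re-running the document encoder.
    return doc_plugin.get(query.lower(), 0)

doc = "Plug and play modules encode the document once"
plugin = encode_document(doc)  # encoded a single time
results = [answer_task(plugin, q) for q in ("document", "once", "twice")]
```

The point of the sketch is the call pattern: one encoding, many task queries — the opposite of the coupled methods the abstract contrasts against.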

Rethinking Dense Retrieval's Few-Shot Ability

1 code implementation · 12 Apr 2023 · Si Sun, Yida Lu, Shi Yu, Xiangyang Li, Zhonghua Li, Zhao Cao, Zhiyuan Liu, Deiming Ye, Jie Bao

Moreover, the dataset is split into disjoint base and novel classes, allowing DR models to be continuously trained on ample data from the base classes and a few samples from the novel classes.

Retrieval

Contrastive Learning enhanced Author-Style Headline Generation

1 code implementation · 7 Nov 2022 · Hui Liu, Weidong Guo, Yige Chen, Xiangyang Li

In this paper, we propose a novel Seq2Seq model called CLH3G (Contrastive Learning enhanced Historical Headlines based Headline Generation) which can use the historical headlines of the articles that the author wrote in the past to improve the headline generation of current articles.

Contrastive Learning · Headline Generation

IntTower: the Next Generation of Two-Tower Model for Pre-Ranking System

2 code implementations · 18 Oct 2022 · Xiangyang Li, Bo Chen, Huifeng Guo, Jingjie Li, Chenxu Zhu, Xiang Long, Sujian Li, Yichao Wang, Wei Guo, Longxia Mao, JinXing Liu, Zhenhua Dong, Ruiming Tang

The FE-Block module performs fine-grained, early feature interactions to explicitly capture the interactive signals between the user and item towers, while the CIR module leverages a contrastive interaction regularization to further enhance the interactions implicitly.
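The contrast the abstract draws — a single late dot product between final tower outputs versus fine-grained interactions between intermediate representations — can be sketched schematically. The layer outputs and dimensions below are invented toy numbers, not the model's actual architecture.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical per-layer outputs of the user and item towers (shallow -> deep).
user_layers = [[0.2, 0.1], [0.5, -0.3]]
item_layers = [[0.4, 0.6], [0.1, 0.2]]

# Classic two-tower: one dot product between the final representations only.
late_score = dot(user_layers[-1], item_layers[-1])

# Early-interaction flavor: intermediate layers also interact, and their
# signals are fused into the final score.
early_scores = [dot(u, i) for u, i in zip(user_layers, item_layers)]
final_score = sum(early_scores)
```

The extra cross-tower terms are what allow a pre-ranking model to capture user-item signals that a purely late dot product misses, at some additional serving cost.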

TeleMelody: Lyric-to-Melody Generation with a Template-Based Two-Stage Method

1 code implementation · 20 Sep 2021 · Zeqian Ju, Peiling Lu, Xu Tan, Rui Wang, Chen Zhang, Songruoyao Wu, Kejun Zhang, Xiangyang Li, Tao Qin, Tie-Yan Liu

In this paper, we develop TeleMelody, a two-stage lyric-to-melody generation system with music template (e.g., tonality, chord progression, rhythm pattern, and cadence) to bridge the gap between lyrics and melodies (i.e., the system consists of a lyric-to-template module and a template-to-melody module).

Exploring Text-transformers in AAAI 2021 Shared Task: COVID-19 Fake News Detection in English

1 code implementation · 7 Jan 2021 · Xiangyang Li, Yu Xia, Xiang Long, Zheng Li, Sujian Li

In this paper, we describe our system for the AAAI 2021 shared task of COVID-19 Fake News Detection in English, where we achieved 3rd place with a weighted F1 score of 0.9859 on the test set.

Fake News Detection · Position
