Search Results for author: Zhenghao Liu

Found 23 papers, 17 papers with code

Text Matching Improves Sequential Recommendation by Reducing Popularity Biases

1 code implementation • 27 Aug 2023 • Zhenghao Liu, Sen Mei, Chenyan Xiong, Xiaohua Li, Shi Yu, Zhiyuan Liu, Yu Gu, Ge Yu

TASTE alleviates the cold start problem by representing long-tail items using full-text modeling and bringing the benefits of pretrained language models to recommendation systems.

Sequential Recommendation Text Matching

Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data

1 code implementation • 31 May 2023 • Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, Ge Yu

SANTA proposes two pretraining methods to make language models structure-aware and to learn effective representations for structured data; the first, Structured Data Alignment, exploits the natural alignment relations between structured and unstructured data for structure-aware pretraining.

Code Search Language Modelling +1

Fusion-in-T5: Unifying Document Ranking Signals for Improved Information Retrieval

no code implementations • 24 May 2023 • Shi Yu, Chenghao Fan, Chenyan Xiong, David Jin, Zhiyuan Liu, Zhenghao Liu

Common IR pipelines are cascade systems that may involve multiple rankers and/or fusion models to integrate different information step by step.

Document Ranking Information Retrieval +2

CHGNN: A Semi-Supervised Contrastive Hypergraph Learning Network

no code implementations • 10 Mar 2023 • Yumeng Song, Yu Gu, Tianyi Li, Jianzhong Qi, Zhenghao Liu, Christian S. Jensen, Ge Yu

However, recent studies on hypergraph learning that extend graph convolutional networks to hypergraphs cannot learn effectively from features of unlabeled data.

Contrastive Learning Node Classification

Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval

1 code implementation • 1 Sep 2022 • Zhenghao Liu, Chenyan Xiong, Yuanhuiyi Lv, Zhiyuan Liu, Ge Yu

To learn a unified embedding space for multi-modal retrieval, UniVL-DR proposes two techniques: 1) Universal embedding optimization strategy, which contrastively optimizes the embedding space using the modality-balanced hard negatives; 2) Image verbalization method, which bridges the modality gap between images and texts in the raw data space.

Image Retrieval Open-Domain Question Answering +2
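The contrastive objective with hard negatives described above can be sketched as a generic InfoNCE-style loss over query and candidate embeddings. This is not the released UniVL-DR code; the dimensions, temperature, and synthetic vectors below are illustrative assumptions.

```python
import numpy as np

def info_nce(q, pos, negs, tau=0.05):
    """InfoNCE-style contrastive loss for one query.

    q:    (d,) query embedding
    pos:  (d,) embedding of the relevant (positive) candidate
    negs: (n, d) hard-negative embeddings (e.g. drawn in a
          modality-balanced way from both text and image candidates)
    """
    scores = np.concatenate(([q @ pos], negs @ q)) / tau
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return -np.log(probs[0])                     # positive sits at index 0

rng = np.random.default_rng(0)
d = 32
q = rng.normal(size=d)
pos = q + 0.1 * rng.normal(size=d)               # aligned with the query
negs = rng.normal(size=(4, d))                   # unrelated candidates

loss_good = info_nce(q, pos, negs)
loss_bad = info_nce(q, negs[0], np.vstack([pos[None, :], negs[1:]]))
print(loss_good < loss_bad)  # the aligned pair yields the lower loss
```

Minimizing this loss pulls the query toward its positive and pushes it away from the hard negatives, which is what shapes the unified embedding space.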

Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder

1 code implementation • 6 May 2022 • Zhenghao Liu, Han Zhang, Chenyan Xiong, Zhiyuan Liu, Yu Gu, Xiaohua Li

These embeddings need to be high-dimensional to fit training signals and guarantee the retrieval effectiveness of dense retrievers.

Dimensionality Reduction Information Retrieval +1
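The idea of compressing high-dimensional dense-retrieval embeddings can be illustrated with a plain linear autoencoder trained by gradient descent; the paper's conditional autoencoder is more involved, so the data, dimensions, and learning rate below are illustrative assumptions, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))         # 256 synthetic 64-dim embeddings

d = 16                                 # target (compressed) dimension
W_enc = 0.1 * rng.normal(size=(64, d))
W_dec = 0.1 * rng.normal(size=(d, 64))

def mse(A, B):
    return float(np.mean((A - B) ** 2))

loss_before = mse(X @ W_enc @ W_dec, X)
lr = 0.005
for _ in range(300):
    Z = X @ W_enc                      # encode: 64 -> 16
    R = Z @ W_dec                      # decode: 16 -> 64
    G = 2.0 * (R - X) / X.shape[0]     # gradient of the loss w.r.t. R
    g_dec = Z.T @ G                    # backprop through the decoder
    g_enc = X.T @ (G @ W_dec.T)        # backprop through the encoder
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss_after = mse(X @ W_enc @ W_dec, X)
print(loss_before, "->", loss_after)   # reconstruction error drops
```

After training, `X @ W_enc` gives 16-dim embeddings that can be indexed and searched far more cheaply than the original 64-dim ones.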

P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning

1 code implementation • 4 May 2022 • Xiaomeng Hu, Shi Yu, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu, Ge Yu

In this paper, we identify and study the two mismatches between pre-training and ranking fine-tuning: the training schema gap regarding the differences in training objectives and model architectures, and the task knowledge gap considering the discrepancy between the knowledge needed in ranking and that learned during pre-training.

YACLC: A Chinese Learner Corpus with Multidimensional Annotation

no code implementations • 30 Dec 2021 • Yingying Wang, Cunliang Kong, Liner Yang, Yijun Wang, Xiaorong Lu, Renfen Hu, Shan He, Zhenghao Liu, Yun Chen, Erhong Yang, Maosong Sun

This resource is of great relevance for second language acquisition research, foreign-language teaching, and automatic grammatical error correction.

Grammatical Error Correction Language Acquisition

More Robust Dense Retrieval with Contrastive Dual Learning

1 code implementation • 16 Jul 2021 • Yizhi Li, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu

With contrastive learning, the dual training objective of DANCE learns more tailored representations for queries and documents, keeping the embedding space smooth and uniform and improving DANCE's ranking performance on the MS MARCO document retrieval task.

Contrastive Learning Information Retrieval +2

Few-Shot Conversational Dense Retrieval

1 code implementation • 10 May 2021 • Shi Yu, Zhenghao Liu, Chenyan Xiong, Tao Feng, Zhiyuan Liu

In this paper, we present a Conversational Dense Retrieval system, ConvDR, that learns contextualized embeddings for multi-turn conversational queries and retrieves documents solely using embedding dot products.

Conversational Search Retrieval
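Retrieving documents "solely using embedding dot products" reduces to a nearest-neighbour search over precomputed document vectors. A brute-force numpy sketch, with random embeddings standing in for a learned encoder such as ConvDR's:

```python
import numpy as np

rng = np.random.default_rng(0)
doc_emb = rng.normal(size=(1000, 128))           # pretend document index
doc_emb /= np.linalg.norm(doc_emb, axis=1, keepdims=True)

def retrieve(query_emb, k=5):
    """Return indices of the k docs with the highest dot-product score."""
    scores = doc_emb @ query_emb                 # one dot product per doc
    return np.argsort(-scores)[:k]

# A query embedded near document 42 should rank it first.
query = doc_emb[42] + 0.01 * rng.normal(size=128)
top = retrieve(query)
print(top[0])  # 42
```

In practice the brute-force matrix product is replaced by an approximate nearest-neighbour index (e.g. FAISS) once the collection is large, but the scoring function stays a plain dot product.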

OpenMatch: An Open Source Library for Neu-IR Research

1 code implementation • 30 Jan 2021 • Zhenghao Liu, Kaitao Zhang, Chenyan Xiong, Zhiyuan Liu, Maosong Sun

OpenMatch is a Python-based library that supports Neural Information Retrieval (Neu-IR) research.

Document Ranking Information Retrieval +1

Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision

1 code implementation • ACL 2021 • Si Sun, Yingzhuo Qian, Zhenghao Liu, Chenyan Xiong, Kaitao Zhang, Jie Bao, Zhiyuan Liu, Paul Bennett

To democratize the benefits of Neu-IR, this paper presents MetaAdaptRank, a domain adaptive learning method that generalizes Neu-IR models from label-rich source domains to few-shot target domains.

Information Retrieval Learning-To-Rank +1

Capturing Global Informativeness in Open Domain Keyphrase Extraction

2 code implementations • 28 Apr 2020 • Si Sun, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, Jie Bao

Open-domain KeyPhrase Extraction (KPE) aims to extract keyphrases from documents without domain or quality restrictions, e.g., web pages of varying domains and quality.

Chunking Informativeness +1

Selective Weak Supervision for Neural Information Retrieval

1 code implementation • 28 Jan 2020 • Kaitao Zhang, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu

This paper democratizes neural information retrieval to scenarios where large scale relevance training signals are not available.

Information Retrieval Learning-To-Rank +1

Fine-grained Fact Verification with Kernel Graph Attention Network

1 code implementation • ACL 2020 • Zhenghao Liu, Chenyan Xiong, Maosong Sun, Zhiyuan Liu

Fact Verification requires fine-grained natural language inference capability that finds subtle clues to identify claims that are syntactically and semantically correct but not well supported.

Fact Verification Graph Attention +1

Explore Entity Embedding Effectiveness in Entity Retrieval

no code implementations • 28 Aug 2019 • Zhenghao Liu, Chenyan Xiong, Maosong Sun, Zhiyuan Liu

Entity embeddings capture rich semantic information from the knowledge graph and represent entities with low-dimensional vectors, providing an opportunity to establish interactions between query-related entities and candidate entities for entity retrieval.

Entity Retrieval Learning-To-Rank +1
