Search Results for author: Haizhen Huang

Found 14 papers, 8 papers with code

Leveraging Bidding Graphs for Advertiser-Aware Relevance Modeling in Sponsored Search

no code implementations · Findings (EMNLP) 2021 · Shuxian Bi, Chaozhuo Li, Xiao Han, Zheng Liu, Xing Xie, Haizhen Huang, Zengxuan Wen

As the fundamental basis of sponsored search, relevance modeling has attracted increasing attention due to its tremendous practical value.

Marketing

ResLoRA: Identity Residual Mapping in Low-Rank Adaption

1 code implementation · 28 Feb 2024 · Shuhua Shi, Shaohan Huang, Minghui Song, Zhoujun Li, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang

As one of the most popular parameter-efficient fine-tuning (PEFT) methods, low-rank adaptation (LoRA) is commonly applied to fine-tune large language models (LLMs).
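For context, LoRA freezes each pretrained weight matrix W and learns a low-rank update BA on top of it; ResLoRA augments this with identity residual mappings. A minimal sketch of a plain LoRA layer (toy sizes and initialization are illustrative, not the paper's ResLoRA code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update W + B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A down-projects, B up-projects; B starts at zero so training
        # begins exactly at the pretrained model.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))  # only lora_A and lora_B receive gradients
```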

Text Diffusion with Reinforced Conditioning

no code implementations · 19 Feb 2024 · Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang

Diffusion models have demonstrated exceptional capability in generating high-quality images, videos, and audio.

Improving Domain Adaptation through Extended-Text Reading Comprehension

1 code implementation · 14 Jan 2024 · Ting Jiang, Shaohan Huang, Shengyue Luo, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang

Continued pre-training on a domain-specific corpus is a prevalent method for enhancing the domain-specific capabilities of large language models.

Clustering · Domain Adaptation +1

Democratizing Reasoning Ability: Tailored Learning from Large Language Model

1 code implementation · 20 Oct 2023 · Zhaoyang Wang, Shaohan Huang, Yuxuan Liu, Jiahai Wang, Minghui Song, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang

In this paper, we propose a tailored learning approach to distill such reasoning ability into smaller LMs, facilitating the democratization of this exclusive reasoning ability.

Instruction Following · Language Modelling +1
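A rough illustration of the simplest form of this idea is sequence-level distillation: fine-tune the student on rationale-plus-answer sequences produced by the teacher. The sketch below uses a toy model and random tokens; the paper's tailored, interactive approach goes well beyond this.

```python
import torch
import torch.nn.functional as F

# Toy student LM: embedding followed by a projection to the vocabulary.
vocab, hidden = 100, 32
student = torch.nn.Sequential(torch.nn.Embedding(vocab, hidden),
                              torch.nn.Linear(hidden, vocab))

# Pretend the teacher LLM generated this rationale+answer token sequence.
teacher_tokens = torch.randint(0, vocab, (1, 16))

# Standard next-token cross-entropy on the teacher-generated sequence.
logits = student(teacher_tokens[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, vocab),
                       teacher_tokens[:, 1:].reshape(-1))
loss.backward()
```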

Auto Search Indexer for End-to-End Document Retrieval

no code implementations · 19 Oct 2023 · Tianchi Yang, Minghui Song, Zihan Zhang, Haizhen Huang, Weiwei Deng, Feng Sun, Qi Zhang

Generative retrieval, a new paradigm for document retrieval, has recently attracted research interest because it encodes all documents into the model and directly generates the retrieved documents.

Retrieval
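One way to see the paradigm: at inference time, decoding is restricted to valid document identifiers, commonly via a prefix trie over docid token sequences. A toy sketch of such a constraint (the docids and tokenization here are made up, and the paper's Auto Search Indexer presumably also learns the docids themselves):

```python
# Prefix trie over valid docid token sequences, used to constrain decoding
# so the model can only generate identifiers of real documents.
def build_trie(docids):
    trie = {}
    for tokens in docids:
        node = trie
        for t in tokens:
            node = node.setdefault(t, {})
        node["<eos>"] = {}
    return trie

def allowed_next(trie, prefix):
    node = trie
    for t in prefix:
        node = node.get(t)
        if node is None:
            return []
    return list(node.keys())

trie = build_trie([["d", "0", "4"], ["d", "0", "7"], ["d", "1", "2"]])
print(allowed_next(trie, ["d", "0"]))  # ['4', '7'] — mask all other tokens
```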

Calibrating LLM-Based Evaluator

no code implementations · 23 Sep 2023 · Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang

Recent advances in large language models (LLMs), in both language modeling and emergent capabilities, make them a promising reference-free evaluator of natural language generation quality and a competent alternative to human evaluation.

In-Context Learning · Language Modelling +1

Towards Better Entity Linking with Multi-View Enhanced Distillation

1 code implementation · 27 May 2023 · Yi Liu, Yuan Tian, Jianxun Lian, Xinlong Wang, Yanan Cao, Fang Fang, Wen Zhang, Haizhen Huang, Denvy Deng, Qi Zhang

Aiming at learning entity representations that can match divergent mentions, this paper proposes a Multi-View Enhanced Distillation (MVD) framework, which can effectively transfer knowledge of multiple fine-grained and mention-relevant parts within entities from cross-encoders to dual-encoders.

Entity Linking · Knowledge Distillation +1
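The underlying distillation setup can be sketched generically: a dual-encoder scores mention-entity pairs with a dot product and is trained to match the candidate distribution of a cross-encoder teacher. The sketch below uses random tensors and a plain KL objective; MVD's multi-view, fine-grained transfer is not shown.

```python
import torch
import torch.nn.functional as F

batch, n_cand, dim = 4, 8, 128
mention_emb = torch.randn(batch, dim, requires_grad=True)          # dual-encoder mention side
entity_emb = torch.randn(batch, n_cand, dim, requires_grad=True)   # candidate entity embeddings
teacher_scores = torch.randn(batch, n_cand)                        # cross-encoder relevance scores

# Dual-encoder candidate scores via dot product, pushed toward the teacher.
student_scores = torch.einsum("bd,bcd->bc", mention_emb, entity_emb)
loss = F.kl_div(F.log_softmax(student_scores, dim=-1),
                F.softmax(teacher_scores, dim=-1),
                reduction="batchmean")
loss.backward()
```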

Dual-Alignment Pre-training for Cross-lingual Sentence Embedding

1 code implementation · 16 May 2023 · Ziheng Li, Shaohan Huang, Zihan Zhang, Zhi-Hong Deng, Qiang Lou, Haizhen Huang, Jian Jiao, Furu Wei, Weiwei Deng, Qi Zhang

Recent studies have shown that dual encoder models trained with the sentence-level translation ranking task are effective for cross-lingual sentence embedding.

Language Modelling · Sentence +3
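The translation ranking objective mentioned here is typically an in-batch contrastive (InfoNCE) loss over translation pairs: each source sentence must rank its own translation above all other targets in the batch. A minimal sketch with random embeddings (the temperature and sizes are illustrative):

```python
import torch
import torch.nn.functional as F

batch, dim = 16, 256
src = F.normalize(torch.randn(batch, dim, requires_grad=True), dim=-1)
tgt = F.normalize(torch.randn(batch, dim, requires_grad=True), dim=-1)

sim = src @ tgt.T / 0.05            # pairwise cosine similarities / temperature
labels = torch.arange(batch)        # the i-th source matches the i-th target
# Symmetric ranking loss over both retrieval directions.
loss = (F.cross_entropy(sim, labels) + F.cross_entropy(sim.T, labels)) / 2
loss.backward()
```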

Pre-training Language Model as a Multi-perspective Course Learner

no code implementations · 6 May 2023 · Beiduo Chen, Shaohan Huang, Zihan Zhang, Wu Guo, ZhenHua Ling, Haizhen Huang, Furu Wei, Weiwei Deng, Qi Zhang

In addition, two self-correction courses are proposed to bridge the gap between the two encoders by creating a "correction notebook" for secondary supervision.

Language Modelling · Masked Language Modeling

Improving Non-autoregressive Generation with Mixup Training

1 code implementation · 21 Oct 2021 · Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, Qi Zhang

While pre-trained language models have achieved great success on various natural language understanding tasks, how to effectively leverage them in non-autoregressive generation tasks remains a challenge.

Natural Language Understanding · Paraphrase Generation +2
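For reference, mixup interpolates two training examples (and, symmetrically, their targets) with a coefficient lambda drawn from a Beta distribution; on text this is usually done in embedding space. A generic sketch of the input side (the paper's exact recipe for non-autoregressive generation may differ):

```python
import torch

# Generic mixup on token embeddings: interpolate two examples with the same
# lambda used (elsewhere) to interpolate their training targets.
emb = torch.nn.Embedding(100, 64)
x1 = emb(torch.randint(0, 100, (1, 12)))   # embeddings of example 1
x2 = emb(torch.randint(0, 100, (1, 12)))   # embeddings of example 2

lam = torch.distributions.Beta(0.5, 0.5).sample()
x_mix = lam * x1 + (1 - lam) * x2          # mixed input fed to the model
```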
