Search Results for author: Zehan Li

Found 10 papers, 5 papers with code

Advancing GenAI Assisted Programming--A Comparative Study on Prompt Efficiency and Code Quality Between GPT-4 and GLM-4

no code implementations • 20 Feb 2024 • Angus Yang, Zehan Li, Jie Li

Our GenAI Coding Workshop highlights the effectiveness and accessibility of the prompting methodology developed in this study.

Code Generation

Large Language Models in Mental Health Care: a Scoping Review

no code implementations • 1 Jan 2024 • Yining Hua, Fenglin Liu, Kailai Yang, Zehan Li, Yi-han Sheu, Peilin Zhou, Lauren V. Moran, Sophia Ananiadou, Andrew Beam

Objective: The growing use of large language models (LLMs) stimulates a need for a comprehensive review of their applications and outcomes in mental health care contexts.

Language Models are Universal Embedders

1 code implementation • 12 Oct 2023 • Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, Min Zhang

As such cases span from English to other natural or programming languages, from retrieval to classification and beyond, it is desirable to build a unified embedding model rather than dedicated ones for each scenario.

Code Search • Language Modelling • +2
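
For orientation, a minimal Python sketch of the unified-embedder idea this abstract describes: one shared embedding function serves retrieval over natural language and code as well as classification. The embed() function below is a hypothetical token-hashing stand-in so the sketch runs end to end; the paper itself builds the embedder from a language model.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical stand-in for one shared embedder. The paper trains a
    single language-model-based encoder; here tokens are hashed into a
    fixed space just so the sketch is runnable."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        seed = int(hashlib.md5(tok.encode()).hexdigest()[:8], 16)
        vec += np.random.default_rng(seed).standard_normal(dim)
    n = np.linalg.norm(vec)
    return vec / n if n else vec

# One embedding space serves retrieval over English text and code alike ...
docs = ["def add(a, b): return a + b",
        "Paris is the capital of France"]
doc_vecs = np.stack([embed(d) for d in docs])
for d, s in zip(docs, doc_vecs @ embed("return a function to add two numbers")):
    print(f"{s:+.3f}  {d}")   # cosine similarity on unit vectors

# ... and classification, e.g. by nearest labeled prototype.
protos = {"code": embed("def function return value"),
          "fact": embed("the capital of a country")}
label = max(protos, key=lambda k: float(protos[k] @ embed(docs[1])))
print(label)   # token overlap steers this toy embedder toward "fact"
```

The point of the sketch is the interface, not the scores: every downstream task consumes the same fixed-size vectors, which is what makes a single universal embedder preferable to per-scenario models.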

Challenging Decoder helps in Masked Auto-Encoder Pre-training for Dense Passage Retrieval

no code implementations • 22 May 2023 • Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie

Recently, various studies have been directed towards exploring dense passage retrieval techniques employing pre-trained language models, among which the masked auto-encoder (MAE) pre-training architecture has emerged as the most promising.

Passage Retrieval • Retrieval
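
For context, here is a minimal PyTorch sketch of the general MAE-for-retrieval setup the abstract refers to: a deep encoder must compress a passage into one bottleneck vector because a deliberately weak decoder has to reconstruct heavily masked tokens from it. All sizes, the mask ratio, and the one-layer decoder are illustrative assumptions; the paper's specific way of making the decoder's task challenging may differ.

```python
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID, MASK_RATIO = 30522, 256, 103, 0.45  # assumed sizes

class MAEPretrainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), 6)
        self.decoder = nn.TransformerEncoder(      # kept shallow on purpose
            nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), 1)
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):                        # ids: (batch, seq_len)
        mask = torch.rand(ids.shape) < MASK_RATIO  # aggressive masking
        mask[:, 0] = False                         # keep the [CLS] slot
        masked = ids.masked_fill(mask, MASK_ID)
        h = self.encoder(self.emb(masked))
        cls = h[:, :1]                             # bottleneck passage vector
        # The weak decoder sees only masked embeddings plus the bottleneck,
        # so the reconstruction pressure lands on the single passage vector.
        dec_out = self.decoder(torch.cat([cls, self.emb(masked)[:, 1:]], 1))
        loss = nn.functional.cross_entropy(
            self.lm_head(dec_out)[mask], ids[mask])
        return cls.squeeze(1), loss                # vector later used for retrieval

model = MAEPretrainer()
vec, loss = model(torch.randint(0, VOCAB, (2, 32)))
print(vec.shape, float(loss))
```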

Learning Diverse Document Representations with Deep Query Interactions for Dense Retrieval

1 code implementation • 8 Aug 2022 • Zehan Li, Nan Yang, Liang Wang, Furu Wei

In this paper, we propose a new dense retrieval model which learns diverse document representations with deep query interactions.

Retrieval
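
As a rough sketch of the scoring idea named in the abstract: each document is stored as several query-conditioned vectors ("views"), and a query is matched through the document's best view. The _vec() helper and the mean-based interaction below are hypothetical stand-ins for the trained encoder, not the paper's model.

```python
import hashlib
import numpy as np

def _vec(text: str, dim: int = 32) -> np.ndarray:
    """Toy deterministic text vector; a stand-in for a trained encoder."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def encode_doc(doc: str, pseudo_queries: list[str]) -> np.ndarray:
    # One vector per (document, pseudo-query) pair yields diverse views of
    # the same document; a real model would encode each pair jointly, and
    # the mean below is only a placeholder for that deep interaction.
    return np.stack([(_vec(doc) + _vec(q)) / 2 for q in pseudo_queries])

def score(query: str, doc_views: np.ndarray) -> float:
    # Max over views: the query is matched through the document's best view.
    return float((doc_views @ _vec(query)).max())

views = encode_doc("transformers for passage ranking",
                   ["what are transformers", "how to rank passages"])
print(score("how do transformers rank passages", views))
```

Because documents are encoded offline, the multiple views add index size but keep query-time cost close to a standard dual encoder: scoring is still dot products followed by a max.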
