1 code implementation • 25 Mar 2024 • Zehan Li, Jianfei Zhang, Chuantao Yin, Yuanxin Ouyang, Wenge Rong
Retrieval-based code question answering seeks to match user queries in natural language to relevant code snippets.
no code implementations • 20 Feb 2024 • Angus Yang, Zehan Li, Jie Li
Our GenAI Coding Workshop highlights the effectiveness and accessibility of the prompting methodology developed in this study.
no code implementations • 1 Jan 2024 • Yining Hua, Fenglin Liu, Kailai Yang, Zehan Li, Yi-han Sheu, Peilin Zhou, Lauren V. Moran, Sophia Ananiadou, Andrew Beam
Objective: The growing use of large language models (LLMs) stimulates a need for a comprehensive review of their applications and outcomes in mental health care contexts.
1 code implementation • 9 Nov 2023 • Yanzhao Zhang, Dingkun Long, Zehan Li, Pengjun Xie
Pre-trained language models (PLMs) have recently shown great success in the field of text representation.
1 code implementation • 12 Oct 2023 • Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, Min Zhang
As such cases span from English to other natural or programming languages, from retrieval to classification and beyond, it is desirable to build a unified embedding model rather than dedicated ones for each scenario.
no code implementations • 7 Aug 2023 • Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang
We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning.
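Contrastive training for text embeddings typically pulls a query toward its paired positive while pushing it away from the other in-batch documents. Below is a minimal, illustrative sketch of such an in-batch-negatives (InfoNCE-style) loss in plain Python; the function names and the temperature value are assumptions for illustration, not the GTE training code.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(queries, docs, temperature=0.05):
    """In-batch-negatives contrastive loss: docs[i] is the positive for
    queries[i]; every other document in the batch serves as a negative."""
    losses = []
    for i, q in enumerate(queries):
        logits = [cosine(q, d) / temperature for d in docs]
        # log-sum-exp computed stably, then subtract the positive's logit
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_z - logits[i])
    return sum(losses) / len(losses)
```

When query and positive embeddings align and negatives are orthogonal, the loss approaches zero; mismatched pairings drive it up, which is the signal the encoder is trained on.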
no code implementations • 22 May 2023 • Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie
Recently, various studies have been directed towards exploring dense passage retrieval techniques employing pre-trained language models, among which the masked auto-encoder (MAE) pre-training architecture has emerged as the most promising.
1 code implementation • 29 Mar 2023 • Yan Hu, Qingyu Chen, Jingcheng Du, Xueqing Peng, Vipina Kuttichi Keloth, Xu Zuo, Yujia Zhou, Zehan Li, Xiaoqian Jiang, Zhiyong Lu, Kirk Roberts, Hua Xu
Results: Using baseline prompts, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.634 and 0.804 for MTSamples, and 0.301 and 0.593 for VAERS.
1 code implementation • 8 Aug 2022 • Zehan Li, Nan Yang, Liang Wang, Furu Wei
In this paper, we propose a new dense retrieval model which learns diverse document representations with deep query interactions.
no code implementations • 6 Jul 2022 • Zehan Li, Haoran Miao, Keqi Deng, Gaofeng Cheng, Sanli Tian, Ta Li, Yonghong Yan
First, we introduce a real-time encoder-state revision strategy to modify previous states.
Automatic Speech Recognition (ASR) +2