Search Results for author: Junwei Liu

Found 7 papers, 6 papers with code

Large Language Model-Based Agents for Software Engineering: A Survey

1 code implementation · 4 Sep 2024 · Junwei Liu, Kaixin Wang, Yixuan Chen, Xin Peng, Zhenpeng Chen, Lingming Zhang, Yiling Lou

Recent advances in Large Language Models (LLMs) have shaped a new paradigm of AI agents, i.e., LLM-based agents.

AI Agent · Language Modelling · +1

A Refer-and-Ground Multimodal Large Language Model for Biomedicine

1 code implementation · 26 Jun 2024 · Xiaoshuang Huang, Haifeng Huang, Lingdong Shen, Yehui Yang, Fangxin Shang, Junwei Liu, Jia Liu

Additionally, we introduce a Refer-and-Ground Multimodal Large Language Model for Biomedicine (BiRD) by using this dataset and multi-task instruction learning.

Language Modelling · Large Language Model · +1

SynFundus-1M: A High-quality Million-scale Synthetic fundus images Dataset with Fifteen Types of Annotation

1 code implementation · 1 Dec 2023 · Fangxin Shang, Jie Fu, Yehui Yang, Haifeng Huang, Junwei Liu, Lei Ma

Large-scale public datasets with high-quality annotations are rarely available for intelligent medical imaging research, due to data privacy concerns and the cost of annotations.

Denoising

ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation

1 code implementation · 3 Aug 2023 · Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou

Third, we find that generating the entire class all at once (i.e., the holistic generation strategy) is the best generation strategy only for GPT-4 and GPT-3.5, while method-by-method generation (i.e., incremental and compositional) is a better strategy for the other models, which have limited ability to understand long instructions and utilize the middle information.

Class-level Code Generation · HumanEval
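The contrast between holistic and method-by-method generation can be sketched as two ways of building prompts from a class skeleton. This is a hypothetical illustration: the skeleton, prompt wording, and function names are not taken from ClassEval itself.

```python
# Illustrative class skeleton; not from the ClassEval benchmark.
class_skeleton = {
    "name": "Stack",
    "docstring": "A simple LIFO stack.",
    "methods": ["push", "pop", "peek"],
}

def holistic_prompt(skeleton):
    """One prompt asking the model to generate the entire class at once."""
    return (
        f"Generate the complete Python class `{skeleton['name']}` "
        f"({skeleton['docstring']}) with methods: "
        + ", ".join(skeleton["methods"]) + "."
    )

def incremental_prompts(skeleton):
    """One prompt per method; each prompt carries the methods generated so far,
    so the model never has to track the whole class in one long instruction."""
    prompts, done = [], []
    for method in skeleton["methods"]:
        context = f" Already implemented: {', '.join(done)}." if done else ""
        prompts.append(
            f"In class `{skeleton['name']}`, implement the method `{method}`.{context}"
        )
        done.append(method)
    return prompts

print(holistic_prompt(class_skeleton))            # one request for the whole class
print(len(incremental_prompts(class_skeleton)))   # one request per method: 3
```

The finding above suggests the holistic prompt suits models that handle long instructions well, while the shorter per-method prompts help weaker models.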

Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation

no code implementations · 2 Aug 2023 · Zhiqiang Yuan, Junwei Liu, Qiancheng Zi, Mingwei Liu, Xin Peng, Yiling Lou

First, for the zero-shot setting, instructed LLMs are very competitive on code comprehension and generation tasks and sometimes even better than small SOTA models specifically fine-tuned on each downstream task.

Fast Online Hashing with Multi-Label Projection

1 code implementation · 3 Dec 2022 · Wenzhe Jia, Yuan Cao, Junwei Liu, Jie Gui

When a new query arrives, only the binary codes of the corresponding potential neighbors are updated.

Retrieval
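The selective-update idea in the snippet above (re-hashing only a query's potential neighbors rather than the whole database) can be sketched with NumPy. Everything here is illustrative: the toy data, the nearest-neighbor selection, and the `projection` matrix (standing in for whatever hash function the online learner has just updated) are assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database: 100 items with 8-D features and 16-bit binary hash codes.
features = rng.standard_normal((100, 8))
codes = (features @ rng.standard_normal((8, 16)) > 0).astype(np.int8)

def update_neighbors_only(query, projection, k=5):
    """Re-hash only the k database items nearest to the query,
    leaving the binary codes of all other items untouched."""
    dists = np.linalg.norm(features - query, axis=1)
    neighbor_idx = np.argsort(dists)[:k]              # potential neighbors
    codes[neighbor_idx] = (features[neighbor_idx] @ projection > 0).astype(np.int8)
    return neighbor_idx

query = rng.standard_normal(8)
new_projection = rng.standard_normal((8, 16))         # hash function after an online step
touched = update_neighbors_only(query, new_projection)
print(len(touched))  # only 5 of the 100 codes were recomputed
```

Updating a handful of codes per query instead of the full database is what makes this kind of online hashing fast.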

Robust Retinal Vessel Segmentation from a Data Augmentation Perspective

1 code implementation · 31 Jul 2020 · Xu Sun, Huihui Fang, Yehui Yang, Dongwei Zhu, Lei Wang, Junwei Liu, Yanwu Xu

In this paper, we propose two new data augmentation modules, namely, channel-wise random Gamma correction and channel-wise random vessel augmentation.

Data Augmentation · Retinal Vessel Segmentation
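Channel-wise random Gamma correction, the first module named in the snippet above, can be sketched in a few lines of NumPy: draw an independent gamma exponent per color channel and apply the power curve channel by channel. The gamma range and image here are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def channelwise_random_gamma(image, gamma_range=(0.5, 2.0)):
    """Apply an independent random gamma correction to each color channel.
    `image` is a float array in [0, 1] with shape (H, W, C)."""
    gammas = rng.uniform(*gamma_range, size=image.shape[-1])
    out = np.empty_like(image)
    for c, g in enumerate(gammas):
        out[..., c] = np.power(image[..., c], g)  # per-channel gamma curve
    return out

fundus = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # stand-in for a fundus image
augmented = channelwise_random_gamma(fundus)
print(augmented.shape)  # (64, 64, 3); values remain in [0, 1]
```

Because each channel gets its own exponent, the augmentation perturbs color balance as well as brightness, which is the kind of variation a robust vessel segmenter should tolerate.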
