Search Results for author: Xueqing Peng

Found 10 papers, 5 papers with code

OrdRankBen: A Novel Ranking Benchmark for Ordinal Relevance in NLP

no code implementations • 2 Mar 2025 • Yan Wang, Lingfei Qian, Xueqing Peng, Jimin Huang, Dongji Feng

The evaluation of ranking tasks remains a significant challenge in natural language processing (NLP), particularly due to the lack of direct labels for results in real-world scenarios.

Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance

1 code implementation • 12 Feb 2025 • Lingfei Qian, Weipeng Zhou, Yan Wang, Xueqing Peng, Han Yi, Jimin Huang, Qianqian Xie, Jianyun Nie

While large language models (LLMs) have shown strong general reasoning capabilities, their effectiveness in financial reasoning, which is crucial for real-world financial applications, remains underexplored.

Benchmarking • Long-Context Understanding

INVESTORBENCH: A Benchmark for Financial Decision-Making Tasks with LLM-based Agent

no code implementations • 24 Dec 2024 • Haohang Li, Yupeng Cao, Yangyang Yu, Shashidhar Reddy Javaji, Zhiyang Deng, Yueru He, Yuechen Jiang, Zining Zhu, Koduvayur Subbalakshmi, Guojun Xiong, Jimin Huang, Lingfei Qian, Xueqing Peng, Qianqian Xie, Jordan W. Suchow

Despite this progress, the field currently encounters two main challenges: (1) the lack of a comprehensive LLM agent framework adaptable to a variety of financial tasks, and (2) the absence of standardized benchmarks and consistent datasets for assessing agent performance.

Decision Making • Language Modeling • +2

Relation Extraction Using Large Language Models: A Case Study on Acupuncture Point Locations

no code implementations • 8 Apr 2024 • Yiming Li, Xueqing Peng, Jianfu Li, Xu Zuo, Suyuan Peng, Donghong Pei, Cui Tao, Hua Xu, Na Hong

This study underscores the effectiveness of LLMs like GPT in extracting relations related to acupoint locations, with implications for accurately modeling acupuncture knowledge and promoting standard implementation in acupuncture training and practice.

Relation • Relation Extraction

Me LLaMA: Foundation Large Language Models for Medical Applications

1 code implementation • 20 Feb 2024 • Qianqian Xie, Qingyu Chen, Aokun Chen, Cheng Peng, Yan Hu, Fongci Lin, Xueqing Peng, Jimin Huang, Jeffrey Zhang, Vipina Keloth, Xinyu Zhou, Lingfei Qian, Huan He, Dennis Shung, Lucila Ohno-Machado, Yonghui Wu, Hua Xu, Jiang Bian

This work underscores the importance of domain-specific data in developing medical LLMs and addresses the high computational costs involved in training, highlighting a balance between pre-training and fine-tuning strategies.

Few-Shot Learning