Search Results for author: Wenqi Shi

Found 12 papers, 7 papers with code

MedAdapter: Efficient Test-Time Adaptation of Large Language Models towards Medical Reasoning

1 code implementation · 5 May 2024 · Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Haotian Sun, Hang Wu, Carl Yang, May D. Wang

Faced with the challenges of balancing model performance, computational resources, and data privacy, MedAdapter provides an efficient, privacy-preserving, cost-effective, and transparent solution for adapting LLMs to the biomedical domain.

Privacy Preserving · Test-time Adaptation

BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers

1 code implementation · 29 Apr 2024 · Ran Xu, Wenqi Shi, Yue Yu, Yuchen Zhuang, Yanqiao Zhu, May D. Wang, Joyce C. Ho, Chao Zhang, Carl Yang

Developing effective biomedical retrieval models is important for excelling at knowledge-intensive biomedical tasks, but it remains challenging due to the scarcity of publicly annotated biomedical data and limited computational resources.

Retrieval · Unsupervised Pre-training

RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic Health Records

1 code implementation · 25 Feb 2024 · Ran Xu, Wenqi Shi, Yue Yu, Yuchen Zhuang, Bowen Jin, May D. Wang, Joyce C. Ho, Carl Yang

We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical predictions on Electronic Health Records (EHRs).

Retrieval

EHRAgent: Code Empowers Large Language Models for Few-shot Complex Tabular Reasoning on Electronic Health Records

1 code implementation · 13 Jan 2024 · Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Jieyu Zhang, Hang Wu, Yuanda Zhu, Joyce Ho, Carl Yang, May D. Wang

Large language models (LLMs) have demonstrated exceptional capabilities in planning and tool utilization as autonomous agents, but few have been developed for medical problem-solving.

Code Generation · Few-Shot Learning · +1

Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models

1 code implementation · 1 Nov 2023 · Ran Xu, Hejie Cui, Yue Yu, Xuan Kan, Wenqi Shi, Yuchen Zhuang, Wei Jin, Joyce Ho, Carl Yang

Clinical natural language processing requires methods that can address domain-specific challenges, such as complex medical terminology and clinical contexts.

Clinical Knowledge · Diversity · +2

Autonomous Soft Tissue Retraction Using Demonstration-Guided Reinforcement Learning

no code implementations · 2 Sep 2023 · Amritpal Singh, Wenqi Shi, May D. Wang

Furthermore, we investigate the soft tissue interactions facilitated by the patient-side manipulator of the DaVinci surgical robot.

Reinforcement Learning · +1

Nonlinear Causal Discovery via Kernel Anchor Regression

1 code implementation · 30 Oct 2022 · Wenqi Shi, Wenkai Xu

Anchor regression has been developed to address this problem for a large class of causal graphical models, though the relationships between the variables are assumed to be linear.
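For context, the linear anchor regression estimator that this paper generalizes with kernels takes roughly the following form (notation is assumed here, not quoted from the paper):

```latex
b^{\gamma} \;=\; \arg\min_{b} \;
\mathbb{E}\!\left[\left\| (\mathrm{Id} - P_A)\,(Y - Xb) \right\|^2\right]
\;+\; \gamma \,\mathbb{E}\!\left[\left\| P_A\,(Y - Xb) \right\|^2\right]
```

where $P_A$ is the $L^2$ projection onto the anchor variables $A$ and $\gamma \geq 0$ controls robustness to anchor-induced distribution shifts; $\gamma = 1$ recovers ordinary least squares, while $\gamma \to \infty$ approaches the instrumental-variable solution. Note the linearity of $Y - Xb$, which is the restriction the kernel variant relaxes.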

Causal Discovery · Regression

Multi-user Co-inference with Batch Processing Capable Edge Server

no code implementations · 3 Jun 2022 · Wenqi Shi, Sheng Zhou, Zhisheng Niu, Miao Jiang, Lu Geng

To deal with the coupled offloading and scheduling introduced by concurrent batch processing, we first consider an offline problem with a constant edge inference latency and the same latency constraint.

Scheduling

Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review

no code implementations · 23 Dec 2021 · Felipe Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying Sha, Li Tong, Mitali Gupte, May D. Wang

This systematic review examines the use of Explainable Artificial Intelligence (XAI) during the pandemic and how its use could overcome barriers to real-world success.

Decision Making · Experimental Design · +2

Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning

no code implementations · 14 Jul 2020 · Wenqi Shi, Sheng Zhou, Zhisheng Niu, Miao Jiang, Lu Geng

Then, a greedy device scheduling algorithm is introduced: in each step, it selects the device with the least update time under the optimal bandwidth allocation, and it stops once the lower bound begins to increase, meaning that scheduling more devices would degrade the model accuracy.
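The greedy rule described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: `update_time` and `loss_lower_bound` are stand-in interfaces for the paper's optimal bandwidth allocation and convergence bound.

```python
def greedy_schedule(devices, update_time, loss_lower_bound):
    """Greedy device scheduling sketch (illustrative interfaces only).

    update_time(d, S): device d's model-update time under the optimal
        bandwidth allocation when scheduled alongside the set S.
    loss_lower_bound(S): convergence lower bound when scheduling set S.
    """
    scheduled = []
    remaining = set(devices)
    while remaining:
        # Step 1: pick the remaining device with the least update time
        # under the optimal bandwidth allocation.
        d = min(remaining, key=lambda x: update_time(x, scheduled))
        # Step 2: stop once adding a device makes the bound increase,
        # i.e., scheduling more devices would degrade model accuracy.
        if scheduled and loss_lower_bound(scheduled + [d]) > loss_lower_bound(scheduled):
            break
        scheduled.append(d)
        remaining.remove(d)
    return scheduled
```

The key design point carried over from the abstract is that the stopping criterion is data-driven (the bound itself), not a fixed device count.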

Federated Learning · Scheduling

Device Scheduling with Fast Convergence for Wireless Federated Learning

no code implementations · 3 Nov 2019 · Wenqi Shi, Sheng Zhou, Zhisheng Niu

In each iteration of FL (called a round), the edge devices update local models based on their own data and contribute to the global training by uploading the model updates via wireless channels.
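One such round can be sketched minimally as below, assuming list-valued models and an illustrative `local_update` function; the names and the plain averaging step are assumptions for exposition, not the paper's implementation.

```python
def fl_round(global_model, devices, local_update, schedule):
    """One federated learning round (illustrative sketch).

    schedule(devices): picks the subset of devices that participate.
    local_update(model, d): device d's locally updated model (computed
        on its own data), which it "uploads" over the wireless channel.
    The server then averages the uploaded models into the new global one.
    """
    selected = schedule(devices)
    updates = [local_update(global_model, d) for d in selected]  # on-device training
    if not updates:
        return global_model  # no device scheduled this round
    n = len(global_model)
    # Server-side aggregation: coordinate-wise average of uploaded models.
    return [sum(u[i] for u in updates) / len(updates) for i in range(n)]
```

The `schedule` hook is where this paper's contribution sits: which devices are allowed to upload each round determines both the per-round latency and the convergence speed.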

Federated Learning · Scheduling

Improving Device-Edge Cooperative Inference of Deep Learning via 2-Step Pruning

1 code implementation · 8 Mar 2019 · Wenqi Shi, Yunzhong Hou, Sheng Zhou, Zhisheng Niu, Yang Zhang, Lu Geng

Since the output data size of a DNN layer can be larger than that of the raw data, offloading intermediate data between layers can suffer from high transmission latency under limited wireless bandwidth.
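The size inflation the abstract points to is easy to verify with standard architecture shapes. The arithmetic below uses well-known VGG-16 layer dimensions as an illustration (not figures taken from this paper):

```python
# Early feature maps can be much larger than the raw input, so naively
# splitting a DNN after such a layer inflates the data to transmit.
raw_input = 224 * 224 * 3    # 224x224 RGB image: 150,528 values
conv1_out = 224 * 224 * 64   # after a first 64-channel conv layer (VGG-16-style)

inflation = conv1_out / raw_input  # ~21x more data than the raw image
```

This is why the paper prunes in two steps: shrinking both the computation before the split point and the channel count at the split point reduces the intermediate data that must cross the wireless link.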
