Search Results for author: Lei Lu

Found 12 papers, 2 papers with code

All-in-One Tuning and Structural Pruning for Domain-Specific LLMs

no code implementations • 19 Dec 2024 • Lei Lu, Zhepeng Wang, Runxue Bao, Mengbing Wang, Fangyi Li, Yawen Wu, Weiwen Jiang, Jie Xu, Yanzhi Wang, Shangqian Gao

Therefore, such a combination of the pruning decisions and the finetuned weights may be suboptimal, leading to non-negligible performance degradation.
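
The mismatch the snippet above describes can be seen in a few lines: a pruning decision made on the pretrained weights need not agree with the same criterion applied after fine-tuning. A minimal NumPy sketch, assuming simple magnitude-based row pruning as a toy stand-in; the helper name and threshold are illustrative, not the paper's method:

```python
import numpy as np

def magnitude_mask(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the rows with the largest L1 norm (a toy structural pruning decision)."""
    row_scores = np.abs(weights).sum(axis=1)
    k = max(1, int(keep_ratio * weights.shape[0]))
    mask = np.zeros(weights.shape[0], dtype=bool)
    mask[np.argsort(row_scores)[-k:]] = True
    return mask

rng = np.random.default_rng(0)
w_pretrained = rng.normal(size=(8, 4))

# Decide which rows to keep based on the *pretrained* weights ...
mask_before = magnitude_mask(w_pretrained, keep_ratio=0.5)

# ... then fine-tune (simulated here by a random update).
w_finetuned = w_pretrained + 0.5 * rng.normal(size=w_pretrained.shape)

# The same criterion applied to the fine-tuned weights can select different rows,
# which is the pruning/fine-tuning mismatch the abstract refers to.
mask_after = magnitude_mask(w_finetuned, keep_ratio=0.5)
print("rows kept before fine-tuning:", np.flatnonzero(mask_before))
print("rows kept after fine-tuning: ", np.flatnonzero(mask_after))
```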

Fully Open Source Moxin-7B Technical Report

1 code implementation • 8 Dec 2024 • Pu Zhao, Xuan Shen, Zhenglun Kong, Yixin Shen, Sung-En Chang, Timothy Rupprecht, Lei Lu, Enfu Nan, Changdi Yang, Yumei He, Xingchen Xu, Yu Huang, Wei Wang, Yue Chen, Yong He, Yanzhi Wang

Recently, Large Language Models (LLMs) have undergone a significant transformation, marked by a rapid rise in both their popularity and capabilities.

Task Adaptive Feature Distribution Based Network for Few-shot Fine-grained Target Classification

no code implementations • 13 Oct 2024 • Ping Li, Hongbo Wang, Lei Lu

Metric-based few-shot fine-grained classification has shown promise due to its simplicity and efficiency.

Incremental Learning
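
"Metric-based" few-shot classification, as mentioned in the entry above, generally classifies a query by its distance to class representatives in an embedding space. A minimal prototypical-network-style sketch of that general idea; the embedding dimension and episode sizes are assumptions, not the paper's architecture:

```python
import torch

def prototype_classify(support: torch.Tensor,
                       support_labels: torch.Tensor,
                       query: torch.Tensor,
                       n_classes: int) -> torch.Tensor:
    """Assign each query embedding to its nearest class prototype (mean of support embeddings)."""
    prototypes = torch.stack([support[support_labels == c].mean(dim=0)
                              for c in range(n_classes)])   # (n_classes, dim)
    dists = torch.cdist(query, prototypes)                   # (n_query, n_classes)
    return dists.argmin(dim=1)                               # predicted class per query

# Toy 3-way, 5-shot episode with 16-dimensional embeddings (assumed sizes).
support = torch.randn(15, 16)
support_labels = torch.arange(3).repeat_interleave(5)
query = torch.randn(6, 16)
print(prototype_classify(support, support_labels, query, n_classes=3))
```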

Digital Avatars: Framework Development and Their Evaluation

no code implementations • 7 Aug 2024 • Timothy Rupprecht, Sung-En Chang, Yushu Wu, Lei Lu, Enfu Nan, Chih-hsiang Li, Caiyue Lai, Zhimin Li, Zhijun Hu, Yumei He, David Kaeli, Yanzhi Wang

To better quantify how our prompting strategy affects anthropomorphic features like humor, authenticity, and favorability, we present Crowd Vote, an adaptation of Crowd Score that allows judges to elect a large language model (LLM) candidate over competitors answering the same or similar prompts.

Language Modeling, Language Modelling +1
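
As described in the snippet above, Crowd Vote has judges pick a preferred LLM candidate among several answering the same prompt. A minimal tally of such head-to-head ballots might look like the sketch below; the data structures and field names are hypothetical, not the authors' implementation:

```python
from collections import Counter

def crowd_vote_tally(ballots: list[dict]) -> Counter:
    """Count how often each LLM candidate is elected by a judge for a given prompt."""
    wins = Counter()
    for ballot in ballots:
        wins[ballot["winner"]] += 1
    return wins

# Each ballot records one judge's choice among candidates answering the same prompt.
ballots = [
    {"prompt": "Tell a joke about robots.", "judge": "j1", "winner": "model_a"},
    {"prompt": "Tell a joke about robots.", "judge": "j2", "winner": "model_b"},
    {"prompt": "Tell a joke about robots.", "judge": "j3", "winner": "model_a"},
]
print(crowd_vote_tally(ballots).most_common())  # e.g. [('model_a', 2), ('model_b', 1)]
```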

EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge

1 code implementation • 16 Feb 2024 • Xuan Shen, Zhenglun Kong, Changdi Yang, Zhaoyang Han, Lei Lu, Peiyan Dong, Cheng Lyu, Chih-hsiang Li, Xuehang Guo, Zhihao Shu, Wei Niu, Miriam Leeser, Pu Zhao, Yanzhi Wang

In this paper, we propose EdgeQAT, the Entropy and Distribution Guided QAT for the optimization of lightweight LLMs to achieve inference acceleration on Edge devices.

Quantization
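
Quantization-aware training (QAT), the setting of the entry above, typically inserts "fake" quantization into the forward pass while letting full-precision gradients flow via a straight-through estimator. A minimal PyTorch sketch of that general mechanism; the bit-width and layer are illustrative, and this is not EdgeQAT's entropy- and distribution-guided scheme:

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Symmetric uniform fake quantization with a straight-through estimator."""
    @staticmethod
    def forward(ctx, x, n_bits: int = 4):
        qmax = 2 ** (n_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass gradients to the full-precision weights unchanged.
        return grad_output, None

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(2, 4)
out = x @ FakeQuant.apply(w, 4).t()   # quantized weights used in the forward pass
out.sum().backward()                  # gradients still reach the full-precision w
print(w.grad.shape)
```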

Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge

no code implementations • 9 Dec 2023 • Xuan Shen, Peiyan Dong, Lei Lu, Zhenglun Kong, Zhengang Li, Ming Lin, Chao Wu, Yanzhi Wang

Recent works show that 8-bit or lower weight quantization is feasible with minimal impact on end-to-end task performance, while the activation is still not quantized.

Language Modeling, Language Modelling +1
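
Weight-only quantization of the kind the snippet above refers to maps full-precision weights to low-bit integers plus a scale while activations stay in floating point. A minimal symmetric per-tensor INT8 sketch of that generic setup; it is not Agile-Quant's activation-guided method:

```python
import numpy as np

def quantize_weights_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w ≈ scale * q, with q in [-127, 127]."""
    scale = max(float(np.abs(w).max()), 1e-8) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 8)).astype(np.float32)
q, scale = quantize_weights_int8(w)

# Activations stay in float; the dequantized weights are used in the matmul.
x = np.random.default_rng(1).normal(size=(2, 4)).astype(np.float32)
y = x @ dequantize(q, scale)
print("output shape:", y.shape)
print("max weight reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```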

A Two-Dimensional Deep Network for RF-based Drone Detection and Identification Towards Secure Coverage Extension

no code implementations • 26 Aug 2023 • Zixiao Zhao, Qinghe Du, Xiang Yao, Lei Lu, Shijiao Zhang

As drones become increasingly prevalent in human life, they also raise security concerns such as unauthorized access and control, as well as collisions and interference with manned aircraft.

Semi-Supervised Learning for Multi-Label Cardiovascular Diseases Prediction: A Multi-Dataset Study

no code implementations • 18 Jun 2023 • Rushuang Zhou, Lei Lu, Zijun Liu, Ting Xiang, Zhen Liang, David A. Clifton, Yining Dong, Yuan-Ting Zhang

However, the label scarcity problem, the co-occurrence of multiple CVDs and the poor performance on unseen datasets greatly hinder the widespread application of deep learning-based models.

Data Augmentation, Electrocardiography (ECG) +2

The financial value of the within-government political network: Evidence from Chinese municipal corporate bonds

no code implementations • 4 Jan 2022 • Jaehyuk Choi, Lei Lu, Heungju Park, Sungbin Sohn

This paper examines the effect of the political network of Chinese municipal leaders on the pricing of municipal corporate bonds.
