Search Results for author: Mingjie Tang

Found 7 papers, 4 papers with code

Couler: Unified Machine Learning Workflow Optimization in Cloud

1 code implementation • 12 Mar 2024 • Xiaoda Wang, Yuan Tang, Tengda Guo, Bo Sang, Jingji Wu, Jian Sha, Ke Zhang, Jiang Qian, Mingjie Tang

This variety poses a challenge for end-users in terms of mastering different engine APIs.

ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU

1 code implementation • 5 Dec 2023 • Zhengmao Ye, Dengchun Li, Jingqi Tian, Tingfeng Lan, Jie Zuo, Lei Duan, Hui Lu, Yexi Jiang, Jian Sha, Ke Zhang, Mingjie Tang

Transformer-based large language models (LLMs) have demonstrated outstanding performance across diverse domains, particularly when fine-tuned for specific domains.

Large Language Model Scheduling

MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA based Mixture of Experts

1 code implementation • 22 Apr 2024 • Dengchun Li, Yingzi Ma, Naizheng Wang, Zhiyuan Cheng, Lei Duan, Jie Zuo, Cal Yang, Mingjie Tang

Unlike other LoRA-based MoE methods, MixLoRA enhances model performance by utilizing independently configurable attention-layer LoRA adapters, supporting LoRA and its variants for constructing experts, and applying an auxiliary load-balancing loss to address the imbalance problem of the router.
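A minimal sketch of a LoRA-based mixture-of-experts layer with an auxiliary load-balancing loss, loosely following the description above. The class names, dimensions, top-k routing, and Switch-Transformer-style balance loss are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: output = (alpha / r) * B(A(x))."""
    def __init__(self, dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Linear(dim, rank, bias=False)
        self.B = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.B.weight)  # start as a zero update to the frozen weight
        self.scale = alpha / rank

    def forward(self, x):
        return self.scale * self.B(self.A(x))


class LoRAMoE(nn.Module):
    """Routes each token to its top-k LoRA experts (hypothetical sketch)."""
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList([LoRAExpert(dim) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        probs = F.softmax(self.router(x), dim=-1)
        weights, idx = probs.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        # Auxiliary load-balancing loss (assumed Switch-Transformer style):
        # penalize uneven expert usage so the router does not collapse.
        usage = probs.mean(dim=0)  # mean routing probability per expert
        frac = torch.bincount(idx.flatten(), minlength=len(self.experts)).float()
        frac = frac / idx.numel()  # fraction of tokens routed to each expert
        aux_loss = len(self.experts) * (usage * frac).sum()
        return out, aux_loss
```

In use, the adapter output would be added to the frozen base layer's output, and `aux_loss` would be scaled and added to the task loss during fine-tuning.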

DLRover: An Elastic Deep Training Extension with Auto Job Resource Recommendation

no code implementations • 4 Apr 2023 • Qinlong Wang, Bo Sang, HaiTao Zhang, Mingjie Tang, Ke Zhang

The resource configuration of a job deeply affects the job's performance (e.g., training throughput, resource utilization, and completion rate).
