Search Results for author: Xiaonan Nie

Found 11 papers, 6 papers with code

DataSculpt: Crafting Data Landscapes for Long-Context LLMs through Multi-Objective Partitioning

3 code implementations • 2 Sep 2024 • Keer Lu, Xiaonan Nie, Zheng Liang, Da Pan, Shusen Zhang, Keshi Zhao, WeiPeng Chen, Zenan Zhou, Guosheng Dong, Bin Cui, Wentao Zhang

Through extensive experimental analysis, we identified three key challenges in designing effective data management strategies that enable the model to achieve long-context capability without sacrificing performance in other tasks: (1) a shortage of long documents across multiple domains, (2) effective construction of context windows, and (3) efficient organization of large-scale datasets.

Code Completion • Combinatorial Optimization • +5
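To make the "effective construction of context windows" challenge concrete, here is a minimal Python sketch. It is not DataSculpt's multi-objective partitioning algorithm, only a first-fit-decreasing illustration of combining documents into fixed-size windows; the document lengths and the 8K window size are hypothetical.

```python
# Illustrative only: greedy first-fit-decreasing packing of documents into
# fixed-size context windows. DataSculpt's multi-objective partitioning is
# more involved; this sketch only shows the basic window-construction problem.
from typing import List, Tuple

def pack_windows(docs: List[Tuple[str, int]], window_size: int) -> List[List[str]]:
    """Pack (doc_id, num_tokens) pairs into windows of at most window_size tokens."""
    windows: List[List[str]] = []   # doc ids assigned to each window
    remaining: List[int] = []       # free token budget per window

    # Placing longer documents first reduces fragmentation under first-fit.
    for doc_id, length in sorted(docs, key=lambda d: -d[1]):
        if length > window_size:
            continue  # would require splitting/truncation, omitted here
        for i, free in enumerate(remaining):
            if length <= free:
                windows[i].append(doc_id)
                remaining[i] -= length
                break
        else:
            windows.append([doc_id])
            remaining.append(window_size - length)
    return windows

# Hypothetical token counts, packed into 8K-token windows.
print(pack_windows([("a", 5000), ("b", 3000), ("c", 2500), ("d", 6000)], 8192))
```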

MEMO: Fine-grained Tensor Management For Ultra-long Context LLM Training

no code implementations • 16 Jul 2024 • Pinxue Zhao, Hailin Zhang, Fangcheng Fu, Xiaonan Nie, Qibin Liu, Fang Yang, Yuanbo Peng, Dian Jiao, Shuaipeng Li, Jinbao Xue, Yangyu Tao, Bin Cui

By leveraging fine-grained activation memory management, MEMO facilitates efficient training of a 7B LLM with a 1-million-token sequence length on just 8 A800 GPUs, achieving an MFU of 52.30%.

Management
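For context on what an MFU figure like 52.30% measures, below is a rough Model FLOPs Utilization calculation using the common PaLM-style accounting (6N FLOPs per token for the dense matmuls plus an attention term that grows with sequence length). The model shape, throughput, and peak-FLOPS values in the example call are illustrative assumptions, not numbers reported by MEMO.

```python
# Rough sketch of an MFU (Model FLOPs Utilization) calculation; all inputs in
# the example call are illustrative assumptions, not MEMO's measured values.

def mfu(params: float, layers: int, d_model: int, seq_len: int,
        tokens_per_sec: float, num_gpus: int, peak_flops_per_gpu: float) -> float:
    # Forward+backward FLOPs per token: ~6*N for the dense matmuls plus
    # ~12*L*d_model*seq_len for the attention score/value matmuls
    # (PaLM-style accounting); the attention term dominates at 1M-token contexts.
    flops_per_token = 6 * params + 12 * layers * d_model * seq_len
    achieved_flops = flops_per_token * tokens_per_sec
    peak_flops = num_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Hypothetical 7B-class model at a 1M-token context on 8 GPUs,
# assuming ~312 TFLOPS BF16 peak per GPU and a placeholder throughput.
print(f"MFU = {mfu(7e9, 32, 4096, 1_048_576, 770, 8, 312e12):.1%}")
```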

Clover: Regressive Lightweight Speculative Decoding with Sequential Knowledge

1 code implementation • 1 May 2024 • Bin Xiao, Chunan Shi, Xiaonan Nie, Fan Yang, Xiangwei Deng, Lei Su, WeiPeng Chen, Bin Cui

Consequently, the GPU spends most of its time on memory transfer instead of computation.
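The snippet below is a back-of-the-envelope calculation of why that happens: at a decode batch of one token, streaming the weights from memory takes far longer than the matrix math, and verifying several speculated tokens per step (the setting Clover-style speculative decoding targets) reuses the same weight reads. The 7B parameter count and the A100/A800-class bandwidth and FLOPS numbers are assumptions for illustration, not figures from the paper.

```python
# Back-of-the-envelope sketch of why single-token decoding is memory-bound.
# Hardware numbers (roughly A100/A800-class) and the 7B model size are
# illustrative assumptions.

def decode_step_times(params: float, bytes_per_param: int,
                      peak_flops: float, mem_bw: float, batch_tokens: int):
    weight_bytes = params * bytes_per_param   # weights streamed once per step
    flops = 2 * params * batch_tokens         # one multiply-add per param per token
    return weight_bytes / mem_bw, flops / peak_flops

# Batch of 1 token: memory transfer dominates by a wide margin.
t_mem, t_cmp = decode_step_times(7e9, 2, 312e12, 2.0e12, batch_tokens=1)
print(f"memory {t_mem*1e3:.2f} ms vs compute {t_cmp*1e3:.3f} ms")

# Verifying several speculated tokens per step reuses the same weight reads.
t_mem, t_cmp = decode_step_times(7e9, 2, 312e12, 2.0e12, batch_tokens=8)
print(f"memory {t_mem*1e3:.2f} ms vs compute {t_cmp*1e3:.3f} ms")
```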

Improving Automatic Parallel Training via Balanced Memory Workload Optimization

1 code implementation • 5 Jul 2023 • Yujie Wang, Youhe Jiang, Xupeng Miao, Fangcheng Fu, Shenhan Zhu, Xiaonan Nie, Yaofeng Tu, Bin Cui

Transformer models have emerged as the leading approach for achieving state-of-the-art performance across various application domains, serving as the foundation for advanced large-scale deep learning (DL) models.

Navigate

FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement

no code implementations • 8 Apr 2023 • Xiaonan Nie, Xupeng Miao, Zilong Wang, Zichao Yang, Jilong Xue, Lingxiao Ma, Gang Cao, Bin Cui

We first present an empirical analysis on the problems and opportunities of training MoE models, which motivates us to overcome the routing imbalance and fluctuation problems by a dynamic expert management and device placement mechanism.

Scheduling
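As a concrete picture of the routing imbalance that motivates FlexMoE's dynamic expert management and placement, the sketch below counts how many tokens a skewed gate sends to each expert and reports a max/mean load ratio; a perfectly balanced router would sit near 1.0. The random Dirichlet gate is a stand-in for illustration, not FlexMoE's router or placement mechanism.

```python
# Minimal sketch of MoE routing imbalance: count per-expert token loads under
# a skewed (randomly generated) gate and compare the heaviest expert to the
# average load. Not FlexMoE's actual routing or device-placement logic.
import numpy as np

rng = np.random.default_rng(0)
num_experts, num_tokens = 16, 8192

# Skewed gate decisions: a few "hot" experts receive most of the tokens.
probs = rng.dirichlet(np.full(num_experts, 0.3))
assignments = rng.choice(num_experts, size=num_tokens, p=probs)

loads = np.bincount(assignments, minlength=num_experts)
imbalance = loads.max() / loads.mean()   # 1.0 would be perfectly balanced
print(f"per-expert loads: {loads.tolist()}")
print(f"imbalance ratio (max/mean): {imbalance:.2f}")
```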

Angel-PTM: A Scalable and Economical Large-scale Pre-training System in Tencent

no code implementations • 6 Mar 2023 • Xiaonan Nie, Yi Liu, Fangcheng Fu, Jinbao Xue, Dian Jiao, Xupeng Miao, Yangyu Tao, Bin Cui

Recent years have witnessed the unprecedented achievements of large-scale pre-trained models, especially the Transformer models.

Management • Scheduling

Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism

2 code implementations • 25 Nov 2022 • Xupeng Miao, Yujie Wang, Youhe Jiang, Chunan Shi, Xiaonan Nie, Hailin Zhang, Bin Cui

Transformer models have achieved state-of-the-art performance across various application domains and have gradually become the foundation of advanced large-scale deep learning (DL) models.
