Search Results for author: Chong Shen

Found 1 paper, 1 paper with code

Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning

1 code implementation • 10 Oct 2021 • Shaohua Wu, Xudong Zhao, Tong Yu, Rongguo Zhang, Chong Shen, Hongli Liu, Feng Li, Hong Zhu, Jiangang Luo, Liang Xu, Xuanwei Zhang

With this method, Yuan 1.0, currently the largest singleton language model with 245B parameters, achieves excellent performance on thousands of GPUs during training, and state-of-the-art results on NLP tasks.

Few-Shot Learning • Language Modelling • +1
