8 Feb 2023 • Changan Niu, Chuanyi Li, Vincent Ng, Bin Luo
Despite recent advances showing that a model pre-trained on large-scale source code data can achieve appreciable generalization, it still requires a sizeable amount of target-task data for fine-tuning.
24 May 2022 • Changan Niu, Chuanyi Li, Bin Luo, Vincent Ng
In particular, the development and use of pre-trained models of source code have enabled state-of-the-art results on a wide variety of SE tasks.