DOBF: A Deobfuscation Pre-Training Objective for Programming Languages

Recent advances in self-supervised learning have dramatically improved the state of the art on a wide variety of tasks. However, research in language model pre-training has mostly focused on natural languages, and it is unclear whether models like BERT and its variants provide the best pre-training when applied to other modalities, such as source code...
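As the title suggests, the pre-training objective obfuscates source code and trains the model to recover the original identifiers. The snippet below is a minimal sketch of that obfuscation step only: the helper name obfuscate_identifiers, the VAR_i placeholder tokens, and the regex-based renaming are illustrative assumptions, not the paper's implementation.

```python
import re

def obfuscate_identifiers(source, names):
    """Replace the given identifiers with placeholder tokens (VAR_0, VAR_1, ...).

    Returns the obfuscated source and the name mapping a model would be
    trained to recover. Hypothetical helper for illustration only.
    """
    mapping = {}
    for i, name in enumerate(names):
        token = f"VAR_{i}"
        mapping[token] = name
        # \b ensures only whole identifiers are replaced, not substrings.
        source = re.sub(rf"\b{re.escape(name)}\b", token, source)
    return source, mapping

snippet = "def add_interest(balance, rate):\n    return balance * (1 + rate)\n"
obfuscated, target = obfuscate_identifiers(snippet, ["add_interest", "balance", "rate"])
print(obfuscated)  # def VAR_0(VAR_1, VAR_2): return VAR_1 * (1 + VAR_2)
print(target)      # {'VAR_0': 'add_interest', 'VAR_1': 'balance', 'VAR_2': 'rate'}
```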

Methods used in the Paper


METHOD | TYPE
Residual Connection | Skip Connections
Weight Decay | Regularization
Multi-Head Attention | Attention Modules
GELU | Activation Functions
Attention Dropout | Regularization
Layer Normalization | Normalization
WordPiece | Subword Segmentation
Dense Connections | Feedforward Networks
Adam | Stochastic Optimization
Linear Warmup With Linear Decay | Learning Rate Schedules
Scaled Dot-Product Attention | Attention Mechanisms
Softmax | Output Functions
Dropout | Regularization
BERT | Language Models
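
Most of the entries above are standard Transformer building blocks. As a reference point, here is a minimal NumPy sketch of the scaled dot-product attention and softmax components listed in the table; it is the generic single-head formulation, not the paper's specific implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted sum of value vectors

np.random.seed(0)
Q, K, V = (np.random.randn(4, 8) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```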