28 papers with code • 3 benchmarks • 1 dataset
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
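As an illustration of how such bidirectional representations are typically accessed in practice, here is a minimal sketch assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (neither is specified in the excerpt above): every token receives a contextual vector conditioned jointly on its left and right context.

```python
import torch
from transformers import BertModel, BertTokenizer

# Assumed setup: Hugging Face transformers and the bert-base-uncased checkpoint.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT produces bidirectional token representations.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token, conditioned on both left and right context.
token_embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
```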
Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging.
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM) and can be applied not only to graphs but also to any set of objects, regardless of whether a graph is given.
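The excerpt does not give the layer's exact formulation, but the idea of attending over each object's k nearest neighbours can be sketched as follows. This is an illustrative PyTorch implementation under assumed choices (Euclidean neighbours in feature space, scaled dot-product attention), not the paper's definition.

```python
import torch
import torch.nn.functional as F

def knn_attention_pooling(x, k):
    """Pool each object's features over its k nearest neighbours.

    x: (n, d) feature matrix for a set of n objects (no graph required).
    Neighbours are taken by Euclidean distance in feature space; each object
    (which is its own nearest neighbour) aggregates neighbour features with
    scaled dot-product attention weights.
    """
    n, d = x.shape
    dists = torch.cdist(x, x)                        # (n, n) pairwise distances
    knn_idx = dists.topk(k, largest=False).indices   # (n, k) neighbour indices
    neighbours = x[knn_idx]                          # (n, k, d) neighbour features
    scores = torch.einsum("nd,nkd->nk", x, neighbours) / d ** 0.5
    weights = F.softmax(scores, dim=-1)              # (n, k) attention weights
    return torch.einsum("nk,nkd->nd", weights, neighbours)

# Pool a set of 10 objects with 16-dim features over 5 neighbours each.
pooled = knn_attention_pooling(torch.randn(10, 16), k=5)
```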
State-of-the-art knowledge base completion (KBC) models predict a score for every known or unknown fact via a latent factorization over entity and relation embeddings.
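As a concrete instance of such a latent factorization, a DistMult-style scorer (one common KBC choice, used here purely as an illustration rather than the specific model the excerpt refers to) assigns each entity and relation a d-dimensional embedding and scores a triple (head, relation, tail) with the trilinear product sum_i e_head[i] * w_rel[i] * e_tail[i].

```python
import torch
import torch.nn as nn

class DistMultScorer(nn.Module):
    """Scores a fact (head, relation, tail) as <e_head, w_rel, e_tail>."""

    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # entity embeddings
        self.rel = nn.Embedding(num_relations, dim)  # relation embeddings

    def forward(self, heads, rels, tails):
        # heads, rels, tails: (batch,) index tensors -> (batch,) fact scores
        return (self.ent(heads) * self.rel(rels) * self.ent(tails)).sum(dim=-1)

# Score two candidate facts over a toy vocabulary of entities and relations.
scorer = DistMultScorer(num_entities=1000, num_relations=50)
scores = scorer(torch.tensor([3, 7]), torch.tensor([1, 4]), torch.tensor([42, 9]))
```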
Automatically annotating column types with knowledge base (KB) concepts is a critical task for gaining a basic understanding of web tables.