Exception type

2 papers with code • 0 benchmarks • 0 datasets

Exception type prediction is a code understanding task: given a program in which the exception type in an `except` clause has been masked, a model must predict the masked exception class from a fixed vocabulary of common exception types.
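
As a concrete illustration, a single instance might pair a snippet containing a masked `except` clause with the exception class to be predicted. The dictionary layout and the `__HOLE__` marker below are assumptions made for this sketch, not the on-disk format of any particular benchmark release.

```python
# A minimal, illustrative instance of the exception type task. The dict
# layout and the __HOLE__ marker are assumptions for this sketch, not the
# format of any particular benchmark release.
EXAMPLE = {
    "code": (
        "def read_config(path):\n"
        "    try:\n"
        "        with open(path) as f:\n"
        "            return f.read()\n"
        "    except __HOLE__:\n"  # masked exception type
        "        return None\n"
    ),
    # The target is one class drawn from a fixed exception vocabulary.
    "label": "FileNotFoundError",
}

if __name__ == "__main__":
    print(EXAMPLE["code"])
    print("target exception type:", EXAMPLE["label"])
```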

Most implemented papers

Learning and Evaluating Contextual Embedding of Source Code

google-research/google-research ICML 2020

We fine-tune CuBERT on our benchmark tasks and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training and fewer labeled examples.
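
As a rough sketch of the fine-tuning setup described above, the snippet below fine-tunes a BERT-style encoder for exception-type classification with Hugging Face `transformers`. The checkpoint path, label count, and toy data are placeholders for illustration, not the released CuBERT artifacts in the linked repository.

```python
# Minimal sketch of fine-tuning a BERT-style encoder for exception-type
# classification. The checkpoint path, label count, and toy data are
# placeholders, not the official CuBERT release.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINT = "path/to/bert-style-code-checkpoint"  # placeholder checkpoint
NUM_EXCEPTION_TYPES = 20                           # placeholder label-set size

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=NUM_EXCEPTION_TYPES)


class ExceptionTypeDataset(Dataset):
    """Wraps (code snippet, exception-type label id) pairs."""

    def __init__(self, snippets, labels):
        self.enc = tokenizer(snippets, truncation=True, padding=True,
                             max_length=512, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item


# Toy training data for illustration only.
train_ds = ExceptionTypeDataset(
    ["try:\n    open(p)\nexcept __HOLE__:\n    pass"], [3])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_ds,
)
trainer.train()
```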

CodeTrek: Flexible Modeling of Code using an Extensible Relational Representation

ppashakhanloo/CodeTrek ICLR 2022

Designing a suitable representation for code-reasoning tasks is challenging in aspects such as the kinds of program information to model, how to combine them, and how much context to consider.