Code Translation

16 papers with code • 2 benchmarks • 4 datasets

Code translation is the task of automatically converting source code written in one programming language into functionally equivalent code in another language (for example, translating a function from C++ to Python).

Most implemented papers

Unsupervised Translation of Programming Languages

facebookresearch/CodeGen NeurIPS 2020

We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy.
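
A minimal illustration of the task itself (not the TransCoder API): the same function expressed in the three languages the model translates between, written here as Python strings.

    # Parallel implementations of one function; TransCoder learns to map
    # between such snippets without any parallel training data.
    cpp_src    = "int add(int a, int b) { return a + b; }"
    java_src   = "public static int add(int a, int b) { return a + b; }"
    python_src = "def add(a, b):\n    return a + b"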

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation

microsoft/CodeXGLUE 9 Feb 2021

Benchmark datasets have a significant impact on accelerating research in programming language tasks.

Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization

p-lambda/unlabeled_outputs 29 Jun 2020

Empirically, we show that composed fine-tuning improves over standard fine-tuning on two pseudocode-to-code translation datasets (3% and 6% relative).
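
A minimal PyTorch sketch of the idea, with hypothetical module names: the base predictor is fine-tuned as usual, while the pre-trained denoiser composed after it stays frozen.

    import torch.nn as nn

    class ComposedModel(nn.Module):
        """Trainable base predictor followed by a frozen pre-trained denoiser."""
        def __init__(self, base_predictor: nn.Module, denoiser: nn.Module):
            super().__init__()
            self.base = base_predictor        # updated during fine-tuning
            self.denoiser = denoiser          # pre-trained on unlabeled outputs
            for p in self.denoiser.parameters():
                p.requires_grad = False       # keep the denoiser frozen

        def forward(self, x):
            draft = self.base(x)              # possibly invalid draft output
            return self.denoiser(draft)       # map the draft onto valid outputs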

DOBF: A Deobfuscation Pre-Training Objective for Programming Languages

facebookresearch/CodeGen NeurIPS 2021

DOBF introduces a deobfuscation pre-training objective: identifiers in source code are replaced with uninformative placeholders, and the model is trained to recover their original names.
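
An illustrative input/target pair for this objective (placeholder naming follows the paper; the exact preprocessing lives in facebookresearch/CodeGen):

    # The model sees code whose identifiers were replaced by placeholders
    # and must recover the original names.
    obfuscated = "def FUNC_0(VAR_0, VAR_1):\n    return VAR_0 + VAR_1"
    recovered  = {"FUNC_0": "add", "VAR_0": "a", "VAR_1": "b"}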

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation

salesforce/codet5 EMNLP 2021

We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
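
A short, runnable example of masked-span prediction with the Salesforce/codet5-base checkpoint published on the Hugging Face Hub (adapted from the model card):

    from transformers import RobertaTokenizer, T5ForConditionalGeneration

    tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
    model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

    # Ask the model to fill in the masked identifier span.
    text = "def greet(user): print(f'hello <extra_id_0>!')"
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    generated = model.generate(input_ids, max_length=8)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))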

GraphCodeBERT: Pre-training Code Representations with Data Flow

microsoft/CodeBERT ICLR 2021

Instead of relying on syntactic-level structure such as the abstract syntax tree (AST), we use data flow in the pre-training stage, a semantic-level structure of code that encodes the "where-the-value-comes-from" relation between variables.
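
A toy example of such data-flow relations (illustrative only, not the repo's parser output format):

    # "where-the-value-comes-from" edges for a two-line snippet.
    code = "x = a + b\ny = x * 2"
    dataflow_edges = [
        ("x", "comes_from", ["a", "b"]),  # x is computed from a and b
        ("y", "comes_from", ["x"]),       # y is computed from x
    ]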

Unified Pre-training for Program Understanding and Generation

wasiahmad/PLBART NAACL 2021

Experiments on English code summarization, code generation, and code translation across seven programming languages show that PLBART outperforms or rivals state-of-the-art models.

CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks

IBM/Project_CodeNet 25 May 2021

In addition to its large scale, CodeNet provides a rich set of high-quality annotations to benchmark and help accelerate research in AI techniques for a variety of critical coding tasks, including code similarity and classification, code translation between a large variety of programming languages, and code performance (runtime and memory) improvement.

Leveraging Automated Unit Tests for Unsupervised Code Translation

facebookresearch/CodeGen ICLR 2022

With little to no parallel data available for programming languages, unsupervised methods are well-suited to source code translation.
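
A hedged sketch of the selection step, with hypothetical helpers: among the model's candidate translations, keep one only if it passes unit tests generated automatically for the source function.

    def select_translation(candidates, unit_tests):
        # candidates: translations proposed by the model (e.g., a beam)
        # unit_tests: callables derived from automatically generated tests
        #             of the source function; each returns True on success
        for candidate in candidates:
            if all(test(candidate) for test in unit_tests):
                return candidate      # first candidate passing every test
        return None                   # no verified translation found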