Code Translation

39 papers with code • 2 benchmarks • 8 datasets

Code translation is the process of converting code written in one programming language into another while preserving its functionality. It is also known as code conversion, source-to-source translation, or transpilation. Developers typically translate code to adopt a new programming language, improve performance, or modernize legacy systems. Common examples include translating Python to Java, or JavaScript to TypeScript.
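
As a concrete illustration, a code translation model learns to map between functionally equivalent pairs like the one below (held here as Python strings; the Java translation is hand-written for the example):

```python
# Illustrative (source, target) pair of the kind a code
# translation model is trained to produce.

python_source = """\
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""

# Hand-written Java translation of the function above.
java_target = """\
public static int fibonacci(int n) {
    int a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        int tmp = a + b;
        a = b;
        b = tmp;
    }
    return a;
}
"""
```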

Most implemented papers

Unsupervised Translation of Programming Languages

facebookresearch/CodeGen NeurIPS 2020

We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy.
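
Since no parallel data is used, a key ingredient is back-translation: the current model produces a synthetic translation, which then supervises the reverse direction. A minimal sketch, where `model.translate` and `model.train_step` are hypothetical helpers standing in for the real training code:

```python
# Sketch of one back-translation pass for unsupervised code translation.
# `model.translate` and `model.train_step` are hypothetical helpers.

def back_translation_step(model, python_functions):
    for py_fn in python_functions:
        # Translate Python -> C++ with the current model (no gradients).
        cpp_fn = model.translate(py_fn, src_lang="python", tgt_lang="cpp")
        # Train the reverse direction on the synthetic parallel pair.
        model.train_step(src=cpp_fn, src_lang="cpp",
                         tgt=py_fn, tgt_lang="python")
```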

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation

salesforce/codet5 EMNLP 2021

We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
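
The Salesforce/codet5-base checkpoint is published on the Hugging Face Hub; the snippet below is essentially the model card's masked-span example (the generated text may vary):

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Ask the model to fill in a masked span in a Python snippet.
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=8)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```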

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation

microsoft/CodeXGLUE 9 Feb 2021

Benchmark datasets have a significant impact on accelerating research in programming language tasks.
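
Among CodeXGLUE's tasks is a Java-to-C# code-to-code translation benchmark. Assuming the Hugging Face Hub mirror of the dataset (the dataset id and field names below are an assumption worth verifying against the Hub), it can be loaded like this:

```python
from datasets import load_dataset

# Java <-> C# translation pairs from CodeXGLUE (assumed Hub mirror name).
ds = load_dataset("code_x_glue_cc_code_to_code_trans", split="train")
example = ds[0]
print(example["java"][:80])  # Java source snippet
print(example["cs"][:80])    # aligned C# translation
```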

Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization

p-lambda/composed_finetuning 29 Jun 2020

Empirically, we show that composed fine-tuning improves over standard fine-tuning on two pseudocode-to-code translation datasets (3% and 6% relative).
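
The idea is to compose the trainable base predictor with a frozen pre-trained denoiser and fine-tune only the base model through it. A minimal PyTorch-style sketch, with illustrative module and argument names:

```python
# Sketch of one composed fine-tuning step. `base` and `denoiser` are any
# torch modules; only `base`'s parameters are in `optimizer`.

def composed_finetune_step(base, denoiser, optimizer, x, y, loss_fn):
    for p in denoiser.parameters():
        p.requires_grad = False      # keep the pre-trained denoiser frozen
    draft = base(x)                  # rough output, e.g. pseudocode -> code
    output = denoiser(draft)         # frozen denoiser cleans up the draft
    loss = loss_fn(output, y)        # supervise the composed output
    optimizer.zero_grad()
    loss.backward()                  # gradients reach `base` only
    optimizer.step()
    return loss.item()
```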

CodeBLEU: a Method for Automatic Evaluation of Code Synthesis

THUDM/CodeGeeX 22 Sep 2020

Evaluation metrics play a vital role in the growth of a research area, as they define the standard for distinguishing good models from bad ones.
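
CodeBLEU scores a candidate as a weighted combination of standard n-gram match (BLEU), keyword-weighted n-gram match, AST match, and data-flow match. The combination step is simple (the four component scores are assumed to be computed by the reference implementation); the paper's default weights are 0.25 each:

```python
def code_bleu(bleu, weighted_bleu, ast_match, dataflow_match,
              alpha=0.25, beta=0.25, gamma=0.25, delta=0.25):
    """Weighted combination from the CodeBLEU paper; each input is a
    component score in [0, 1] computed elsewhere."""
    return (alpha * bleu + beta * weighted_bleu
            + gamma * ast_match + delta * dataflow_match)
```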

DOBF: A Deobfuscation Pre-Training Objective for Programming Languages

facebookresearch/CodeGen NeurIPS 2021

Recent advances in self-supervised learning have dramatically improved the state of the art on a wide variety of tasks.
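
DOBF's pre-training objective obfuscates class, function, and variable names and trains the model to recover the originals. A deliberately simplified, regex-based sketch of the obfuscation transform (the real implementation uses a proper tokenizer and distinguishes FUNC_i, VAR_i, and CLASS_i placeholders):

```python
import re

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while", "range"}

def obfuscate(source):
    """Toy DOBF transform: replace each distinct identifier with an
    uninformative placeholder and return the mapping the model must
    learn to invert."""
    mapping = {}

    def sub(match):
        name = match.group(0)
        if name in KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = f"VAR_{len(mapping)}"
        return mapping[name]

    return re.sub(r"[A-Za-z_]\w*", sub, source), mapping

code, names = obfuscate("def add(a, b):\n    return a + b\n")
print(code)   # def VAR_0(VAR_1, VAR_2): ... return VAR_1 + VAR_2
print(names)  # {'add': 'VAR_0', 'a': 'VAR_1', 'b': 'VAR_2'}
```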

The impact of lexical and grammatical processing on generating code from natural language

codegenfactors/BertranX Findings (ACL) 2022

Considering the seq2seq architecture of TranX for natural-language-to-code translation, we identify four key components: grammatical constraints, lexical preprocessing, input representations, and copy mechanisms.
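
Of these components, grammatical constraints are the most mechanical: at each decoding step, actions the grammar does not permit are masked out before the next action is chosen. A generic sketch (the action inventory and valid-action set are placeholders, not BertranX's actual interfaces):

```python
import numpy as np

def constrained_decode_step(logits, valid_action_ids):
    """Mask grammar-invalid actions, then pick the best remaining one.
    `logits` holds the model's score per action; `valid_action_ids` would
    come from the grammar given the partially built syntax tree."""
    masked = np.full_like(logits, -np.inf)
    masked[valid_action_ids] = logits[valid_action_ids]
    return int(np.argmax(masked))

# Toy example: 5 candidate actions, grammar only allows actions 1 and 3.
print(constrained_decode_step(np.array([2.0, 0.5, 1.7, 0.9, 3.1]), [1, 3]))  # -> 3
```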

CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models

reddy-lab-code-research/codeattack 31 May 2022

Pre-trained programming language (PL) models (such as CodeT5, CodeBERT, and GraphCodeBERT) have the potential to automate software engineering tasks involving code understanding and code generation.
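
For a flavour of what code-based adversarial perturbations look like, consider identifier renaming: it preserves program semantics but changes the surface form a model sees. The sketch below is a generic illustration, not CodeAttack's exact search procedure, which optimizes such edits against the victim model:

```python
import re

def rename_identifier(source, old, new):
    """Semantics-preserving perturbation: rename one identifier everywhere.
    Generic illustration of structure-aware attacks on code models."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

original = "def add(total, step):\n    return total + step\n"
adversarial = rename_identifier(original, "total", "x1")
print(adversarial)  # same behaviour, different surface form
```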

Multi-lingual Evaluation of Code Generation Models

amazon-research/mbxp-exec-eval 26 Oct 2022

Using these benchmarks, we assess the performance of code generation models in a multi-lingual fashion, and we find that language models generalize to out-of-domain languages, that multi-lingual models outperform mono-lingual ones, that few-shot prompting can teach a model new languages, and that models exhibit zero-shot translation abilities even in mono-lingual settings.
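
Execution-based multi-lingual benchmarks like these typically report pass@k. The standard unbiased estimator (introduced with HumanEval and widely reused) is short enough to quote, assuming n generations per problem of which c pass the unit tests:

```python
import numpy as np

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples drawn
    from n generations (c of which pass the unit tests) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=3, k=5))  # e.g., 20 samples, 3 correct, pass@5
```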

NL2CMD: An Updated Workflow for Natural Language to Bash Commands Translation

magnumresearchgroup/magnum-nlc2cmd 15 Feb 2023

First, we describe a state-of-the-art translation model used to generate Bash Commands from the corresponding English text.