Source Code Summarization
38 papers with code • 9 benchmarks • 7 datasets
Code Summarization is the task of comprehending code and automatically generating descriptions directly from the source code.
Source: Improving Automatic Source Code Summarization via Deep Reinforcement Learning
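As a minimal sketch of the task, the snippet below maps a small function to a one-sentence description with an off-the-shelf code-to-text model; the checkpoint name `Salesforce/codet5-base-multi-sum` is an assumption here, and any pretrained code summarization checkpoint could be substituted.

```python
# Minimal sketch: generate a description directly from source code with a
# pretrained seq2seq model. The checkpoint name is an assumption, not part of
# the papers listed below.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Salesforce/codet5-base-multi-sum"  # assumed code-to-text checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```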
Most implemented papers
A Transformer-based Approach for Source Code Summarization
Generating a readable summary that describes the functionality of a program is known as source code summarization.
Recommendations for Datasets for Source Code Summarization
The main use for these descriptions is in software documentation, e.g. the one-sentence Java method descriptions in JavaDocs.
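One common way such code–description pairs are mined is sketched below, assuming a docstring convention analogous to JavaDoc: the first sentence of a function's documentation comment serves as the reference summary.

```python
# Sketch of how reference summaries are typically mined from documentation:
# the first sentence of a docstring (analogous to a one-sentence JavaDoc
# description) is paired with the function it documents.
import ast
import textwrap

source = textwrap.dedent('''
    def normalize(values):
        """Scale a list of numbers to the range [0, 1].

        Details a summarizer would not need to reproduce.
        """
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]
''')

func = ast.parse(source).body[0]
docstring = ast.get_docstring(func)
reference_summary = docstring.split(".")[0] + "."
print(reference_summary)  # "Scale a list of numbers to the range [0, 1]."
```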
code2seq: Generating Sequences from Structured Representations of Code
The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval.
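As a rough illustration of the structured representation code2seq builds on, the sketch below extracts pairs of AST terminals joined by the syntactic path between them, using Python's ast module in place of the paper's Java extractor; terminal selection and naming are simplified.

```python
# Rough sketch of a code2seq-style input: (terminal, path-of-node-types, terminal)
# triples drawn from the AST. Terminal handling is simplified for illustration.
import ast
from itertools import combinations

def leaf_paths(node, prefix=()):
    """Yield (terminal_value, root_to_leaf_path_of_nodes) for simple terminals."""
    prefix = prefix + (node,)
    if isinstance(node, (ast.Name, ast.Constant)):
        value = node.id if isinstance(node, ast.Name) else repr(node.value)
        yield value, prefix
        return
    for child in ast.iter_child_nodes(node):
        yield from leaf_paths(child, prefix)

def path_between(p1, p2):
    """Join two root-to-leaf paths through their lowest common ancestor."""
    i = 0
    while i < min(len(p1), len(p2)) and p1[i] is p2[i]:
        i += 1
    nodes = tuple(reversed(p1[i:])) + (p1[i - 1],) + p2[i:]
    return "|".join(type(n).__name__ for n in nodes)

tree = ast.parse("def add(a, b):\n    return a + b")
leaves = list(leaf_paths(tree))
for (v1, p1), (v2, p2) in combinations(leaves, 2):
    print(f"({v1}, {path_between(p1, p2)}, {v2})")  # e.g. (a, Name|BinOp|Name, b)
```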
Structured Neural Summarization
Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input.
Improving Automatic Source Code Summarization via Deep Reinforcement Learning
To the best of our knowledge, most state-of-the-art approaches follow an encoder-decoder framework that encodes the code into a hidden space and then decodes it into the natural-language space, suffering from two major drawbacks: a) their encoders only consider the sequential content of code, ignoring the tree structure, which is also critical for code summarization; b) their decoders are typically trained to predict the next word by maximizing the likelihood of the next ground-truth word given the previous ground-truth words.
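In general form (not necessarily the paper's exact notation), the contrast behind drawback (b) is between the teacher-forced maximum-likelihood objective and a reinforcement-learning objective that rewards the model's own sampled summary with a metric such as BLEU:

```latex
% MLE with teacher forcing vs. expected-reward (policy-gradient) training;
% r is a text-similarity reward such as BLEU against the reference summary y*.
\mathcal{L}_{\mathrm{MLE}}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(y_t^{*} \mid y_{<t}^{*},\, x\right),
\qquad
\mathcal{L}_{\mathrm{RL}}(\theta) = -\,\mathbb{E}_{\hat{y} \sim p_\theta(\cdot \mid x)}\left[\, r(\hat{y},\, y^{*}) \,\right].
```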
Code Generation as a Dual Task of Code Summarization
Code summarization (CS) and code generation (CG) are two crucial tasks in the field of automatic software development.
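The duality such dual-task setups typically exploit, stated here in a general form rather than as this paper's exact constraint, is that summarization (code x to summary y) and generation (y to x) factorize the same joint distribution, which can be used to regularize the two models jointly:

```latex
% Probabilistic duality between summarization (x -> y) and generation (y -> x):
% both factorizations describe the same joint distribution over code and summary.
\log P(x) + \log P(y \mid x;\, \theta_{x \to y}) \;=\; \log P(y) + \log P(x \mid y;\, \theta_{y \to x})
```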
Improved Code Summarization via a Graph Neural Network
The first approaches to use structural information flattened the AST into a sequence.
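The sketch below contrasts the two encoder inputs this line of work compares: a flattened, pre-order sequence of AST node types versus the node/edge view a graph neural network can consume; Python's ast module stands in for the paper's Java setting.

```python
# Flattened AST (what earlier sequence models consumed) vs. the explicit
# adjacency a GNN encoder can exploit. Python is used purely for illustration.
import ast

tree = ast.parse("def add(a, b):\n    return a + b")

flat, edges, nodes = [], [], []

def walk(node):
    idx = len(nodes)
    nodes.append(node)
    flat.append(type(node).__name__)
    for child in ast.iter_child_nodes(node):
        edges.append((idx, len(nodes)))  # parent index -> child index
        walk(child)

walk(tree)
print(flat)   # pre-order node-type sequence (the "flattened" AST)
print(edges)  # parent-child edges a graph encoder keeps
```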
Unified Pre-training for Program Understanding and Generation
Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models.
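A hedged usage sketch follows, assuming the publicly released PLBART checkpoints and language codes on the Hugging Face Hub (here `uclanlp/plbart-python-en_XX` for Python-to-English summarization); the model and tokenizer classes are from the transformers library.

```python
# Hedged sketch of running a PLBART checkpoint for Python-to-English code
# summarization. The checkpoint name and language codes are assumptions based
# on the publicly released models, not claims from the abstract above.
from transformers import PLBartForConditionalGeneration, PLBartTokenizer

name = "uclanlp/plbart-python-en_XX"  # assumed released checkpoint
tokenizer = PLBartTokenizer.from_pretrained(name, src_lang="python", tgt_lang="en_XX")
model = PLBartForConditionalGeneration.from_pretrained(name)

code = "def maximum(a, b, c):\n    return max([a, b, c])"
inputs = tokenizer(code, return_tensors="pt")
out = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```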
HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks
Jupyter notebooks allow data scientists to write machine learning code together with its documentation in cells.
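A small sketch of the kind of (documentation, code) cell pairs this setting yields, assuming a local notebook file named example.ipynb and using the nbformat library; the paper's own dataset construction is more involved.

```python
# Pair each code cell with the markdown cell immediately above it, the kind of
# (documentation, code) example this task uses. "example.ipynb" is a placeholder.
import nbformat

nb = nbformat.read("example.ipynb", as_version=4)

pairs, prev_markdown = [], None
for cell in nb.cells:
    if cell.cell_type == "markdown":
        prev_markdown = cell.source
    elif cell.cell_type == "code" and prev_markdown and prev_markdown.strip():
        first_doc_line = prev_markdown.strip().splitlines()[0]
        pairs.append((first_doc_line, cell.source))
        prev_markdown = None

for doc, code in pairs:
    print(doc, "=>", code.splitlines()[0] if code else "")
```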