Method name prediction
14 papers with code • 1 benchmark • 1 dataset
Libraries
Use these libraries to find Method name prediction models and implementations.

Latest papers
Studying Vulnerable Code Entities in R
Pre-trained Code Language Models (Code-PLMs) have advanced rapidly and achieved state-of-the-art results on many software engineering tasks in the past few years.
TransformCode: A Contrastive Learning Framework for Code Embedding via Subtree Transformation
The main reason for this is that encoding every code token would inflate the model's parameter count, with many parameters storing information of little relevance to the task.
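A standard way to curb this vocabulary-driven parameter inflation is to split identifiers into subtokens so the model embeds a small shared vocabulary instead of every unique name. The sketch below is illustrative of that general idea, not necessarily the mechanism TransformCode uses.

```python
import re

def subtokenize(identifier: str) -> list[str]:
    # Split on underscores and on lower-to-upper camelCase boundaries,
    # then lowercase, so "getUserName" and "get_user_name" share subtokens.
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", identifier)
    return [p.lower() for p in parts if p]

print(subtokenize("getUserName"))  # ['get', 'user', 'name']
print(subtokenize("max_value"))    # ['max', 'value']
```

With subtokenization, the embedding table grows with the number of distinct subtokens rather than the number of distinct identifiers, which is far smaller in real corpora.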
Assessing Project-Level Fine-Tuning of ML4SE Models
We evaluate three models of different complexity and compare their quality in three settings: trained on a large dataset of Java projects, further fine-tuned on the data from a particular project, and trained from scratch on this data.
Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models
Our experiments on multiple models across different types of input programs show that the syntax-guided program reduction technique is faster and provides smaller sets of key tokens in reduced programs.
Extracting Label-specific Key Input Features for Neural Code Intelligence Models
Code intelligence (CI) models are often black boxes and offer no insight into the input features they rely on to make correct predictions.
Memorization and Generalization in Neural Code Intelligence Models
The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models.
Understanding Neural Code Intelligence Through Program Simplification
Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model.
PSIMiner: A Tool for Mining Rich Abstract Syntax Trees from Code
PSI trees contain code syntax trees as well as functions to work with them, and therefore can be used to enrich code representation using static analysis algorithms of modern IDEs.
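The core mining step can be approximated in pure Python with the standard `ast` module: walk a file's syntax tree and emit, for each function, its name (the prediction target) paired with a tree-derived representation. This is a simplified analogue of what PSIMiner does with IntelliJ's much richer PSI trees, not its actual implementation.

```python
import ast

def mine_syntax_trees(source: str):
    """For each function in `source`, return (name, node-type sequence).
    The node-type sequence is a crude stand-in for the path-based or
    tree-based representations mined by tools like PSIMiner."""
    samples = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            node_types = [type(n).__name__ for n in ast.walk(node)]
            samples.append((node.name, node_types))
    return samples

code = "def add(a, b):\n    return a + b\n"
for name, types in mine_syntax_trees(code):
    print(name, types[:4])
```

A method-name-prediction dataset is then just these pairs with the name masked out of the tree representation.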
Towards Demystifying Dimensions of Source Code Embeddings
A popular approach to representing source code is neural source code embeddings, which represent programs as high-dimensional vectors computed by training deep neural networks on large volumes of programs.
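In the simplest form, such an embedding maps a program's token sequence to a single fixed-size vector. The sketch below uses a random (untrained) embedding table and mean pooling purely to make the data flow concrete; real models like code2vec learn the table and the pooling end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimensionality (hypothetical, tiny for illustration)
vocab = ["def", "return", "get", "name", "self"]
# Stand-in for a learned embedding table: one d-dimensional vector per token.
table = {tok: rng.normal(size=d) for tok in vocab}

def embed_program(tokens):
    """Mean-pool token vectors into one program vector -- the simplest
    possible code embedding; trained neural encoders replace this pooling."""
    vecs = [table[t] for t in tokens if t in table]
    return np.mean(vecs, axis=0)

v = embed_program(["def", "get", "name", "return", "self"])
print(v.shape)  # (8,)
```

Studies like this one then probe what the individual dimensions of such vectors encode.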
On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations
With the prevalence of publicly available source code repositories for training deep neural networks, neural program models perform well on source code analysis tasks, such as predicting method names, that traditional program analysis techniques cannot easily handle.
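A canonical semantic-preserving transformation used in such generalizability studies is variable renaming: the program's behavior is unchanged, yet a brittle model's prediction may flip. A minimal Python sketch, with a hypothetical rename mapping (real studies generate many transformed variants per program):

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Rename variable identifiers via a fixed mapping -- a
    semantic-preserving transformation for probing neural program models."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):  # function parameters
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

src = "def area(w, h):\n    return w * h\n"
tree = RenameVariables({"w": "v0", "h": "v1"}).visit(ast.parse(src))
print(ast.unparse(tree))  # def area(v0, v1): return v0 * v1
```

If a method-name predictor outputs `area` for the original but not for the renamed variant, it is keying on identifier spellings rather than program semantics.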