no code implementations • 5 May 2024 • Aftab Hussain, Md Rafiqul Islam Rabin, Toufique Ahmed, Bowen Xu, Premkumar Devanbu, Mohammad Amin Alipour
Large language models (LLMs) have brought many exciting new capabilities to software development.
no code implementations • 23 Feb 2024 • Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour
Trojan signatures, as described by Fields et al. (2021), are noticeable differences between the distribution of the trojaned class parameters (weights) and that of the non-trojaned class parameters in a trojaned model, which can be used to detect trojaning.
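The core idea can be sketched in a few lines: score each output class by how far its weight distribution deviates from the others. This is a minimal illustration, not the paper's actual detection procedure; the scoring statistic and the toy weights are assumptions.

```python
from statistics import mean, stdev

def trojan_signature_scores(class_weights):
    """Score each output class by how far the mean of its weights
    deviates from the other classes (larger -> more suspicious).

    class_weights: list of per-class weight vectors, e.g. the rows of
    the final classification layer.
    """
    class_means = [mean(w) for w in class_weights]
    mu = mean(class_means)
    sigma = stdev(class_means) or 1e-12  # guard against zero spread
    return [abs(m - mu) / sigma for m in class_means]

# Toy example: class 2's weights are shifted, mimicking a trojaned class.
weights = [[0.01, -0.02, 0.03],
           [0.01, -0.02, 0.03],
           [0.51, 0.48, 0.53],
           [0.00, 0.02, -0.01]]
scores = trojan_signature_scores(weights)
print(scores.index(max(scores)))  # index of the most anomalous class
```

A real detector would compare full weight distributions (not just means) and operate on a trained model's parameters, but the shape of the signal is the same: the trojaned class's parameters stand out statistically.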
no code implementations • 3 Feb 2024 • Claudio Spiess, David Gros, Kunal Suresh Pai, Michael Pradel, Md Rafiqul Islam Rabin, Amin Alipour, Susmit Jha, Prem Devanbu, Toufique Ahmed
Our contributions will lead to better-calibrated decision-making in the current use of code generated by language models and offer a framework for future research to further improve calibration methods for generative models in Software Engineering.
no code implementations • 8 Mar 2023 • Aftab Hussain, Md Rafiqul Islam Rabin, Bowen Xu, David Lo, Mohammad Amin Alipour
In this paper, we explore the impact of an unsupervised feature enrichment approach based on variable roles on the performance of neural models of code.
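One lightweight way to realize such enrichment is to tag each identifier token with an inferred role before feeding the program to the model. The role names and tagging scheme below are illustrative assumptions, not the paper's actual method:

```python
def enrich_with_roles(tokens, roles):
    """Append a role tag to every identifier with a known role,
    leaving all other tokens untouched."""
    return [f"{t}_{roles[t]}" if t in roles else t for t in tokens]

# Hypothetical roles produced by some upstream static analysis:
# 'i' acts as a stepper (loop counter), 'total' as a gatherer (accumulator).
roles = {"i": "stepper", "total": "gatherer"}
print(enrich_with_roles(["total", "+=", "xs", "[", "i", "]"], roles))
```

The enriched token stream (e.g. `total_gatherer`, `i_stepper`) exposes behavioral information to the model that raw identifier names may not carry.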
no code implementations • 3 Mar 2023 • Md Rafiqul Islam Rabin, Aftab Hussain, Sahil Suneja, Mohammad Amin Alipour
Understanding distractors provides a complementary view of the features' relevance to the predictions of neural models.
2 code implementations • 28 May 2022 • Md Rafiqul Islam Rabin, Aftab Hussain, Mohammad Amin Alipour
Our experiments on multiple models across different types of input programs show that the syntax-guided program reduction technique is faster and provides smaller sets of key tokens in reduced programs.
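The general idea of prediction-preserving reduction can be sketched as a greedy loop that drops whole syntactic units (here simplified to lines) as long as the model's prediction is unchanged. This is a minimal sketch, not the tool from the paper; the `predict` callable is a toy stand-in for a real CI model:

```python
def reduce_program(lines, predict, target):
    """Greedily drop whole statements (here: lines) while the model's
    prediction for the remaining program still equals `target`.
    Lines that cannot be dropped are the "key tokens"."""
    kept = list(lines)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if predict("\n".join(candidate)) == target:
            kept = candidate  # this line was not needed for the prediction
        else:
            i += 1            # this line is key to the prediction
    return kept

# Toy "model": predicts "add" iff the return statement appears.
predict = lambda src: "add" if "return a + b" in src else "other"
prog = ["def f(a, b):", "    x = 0", "    return a + b"]
print(reduce_program(prog, predict, "add"))
```

A syntax-guided reducer improves on this sketch by removing well-formed subtrees of the parse tree rather than raw lines, which is what keeps the reduced programs valid and the search fast.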
2 code implementations • 14 Feb 2022 • Md Rafiqul Islam Rabin
Code intelligence (CI) models are often black boxes and do not offer any insights into the input features they learn for making correct predictions.
no code implementations • 1 Nov 2021 • Md Rafiqul Islam Rabin, Mohammad Amin Alipour
We evaluate several variations of this representation and compare its performance with state-of-the-art representations that utilize the rich syntactic and semantic features of input programs.
2 code implementations • 16 Jun 2021 • Md Rafiqul Islam Rabin, Aftab Hussain, Mohammad Amin Alipour, Vincent J. Hellendoorn
The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models.
2 code implementations • 7 Jun 2021 • Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, Mohammad Amin Alipour
Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model.
1 code implementation • 31 Jul 2020 • Md Rafiqul Islam Rabin, Nghi D. Q. Bui, Ke Wang, Yijun Yu, Lingxiao Jiang, Mohammad Amin Alipour
With the prevalence of publicly available source code repositories for training deep neural network models, neural program models can perform well on source code analysis tasks, such as predicting method names in given programs, that cannot easily be done by traditional program analysis techniques.
1 code implementation • 15 Apr 2020 • Md Rafiqul Islam Rabin, Mohammad Amin Alipour
The abundance of publicly available source code repositories, in conjunction with the advances in neural networks, has enabled data-driven approaches to program analysis.