no code implementations • 22 Mar 2022 • Protik Bose Pranto, Syed Zami-Ul-Haque Navid, Protik Dey, Gias Uddin, Anindya Iqbal
The best-performing model is BERT, with an F1-score of 0.97.
1 code implementation • 29 May 2021 • Masum Hasan, Tanveer Muttaqueen, Abdullah Al Ishtiaq, Kazi Sajeed Mehrab, Md. Mahim Anjum Haque, Tahmid Hasan, Wasi Uddin Ahmad, Anindya Iqbal, Rifat Shahriyar
In this study, we present CoDesc -- a large parallel dataset composed of 4.2 million Java methods and natural language descriptions.
Ranked #1 on Code Search on CoDesc
no code implementations • 16 Apr 2021 • Abdullah Al Ishtiaq, Masum Hasan, Md. Mahim Anjum Haque, Kazi Sajeed Mehrab, Tanveer Muttaqueen, Tahmid Hasan, Anindya Iqbal, Rifat Shahriyar
In this work, we leverage the efficacy of these embedding models using a simple, lightweight 2-layer neural network in the task of semantic code search.
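The listing does not specify the architecture of the 2-layer network, so the following is only a rough sketch of how such a semantic code search setup might look: pre-computed query and code embeddings are projected through a small 2-layer feed-forward network into a shared space and candidates are ranked by cosine similarity. All dimensions, weights, and function names here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_layer_projection(x, w1, b1, w2, b2):
    """Hypothetical 2-layer feed-forward projection (tanh hidden layer)
    mapping a pretrained embedding into a shared search space."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Assumed dimensions: 128-d pretrained embeddings -> 64-d shared space.
d_in, d_hid, d_out = 128, 96, 64
w1, b1 = rng.normal(size=(d_in, d_hid)) * 0.1, np.zeros(d_hid)
w2, b2 = rng.normal(size=(d_hid, d_out)) * 0.1, np.zeros(d_out)

# Stand-ins for embeddings produced by a pretrained embedding model.
query_emb = rng.normal(size=d_in)
code_embs = rng.normal(size=(5, d_in))  # 5 candidate code snippets

# Project query and candidates, then rank candidates by similarity.
q = two_layer_projection(query_emb, w1, b1, w2, b2)
scores = [cosine(q, two_layer_projection(c, w1, b1, w2, b2))
          for c in code_embs]
best = int(np.argmax(scores))  # index of the top-ranked snippet
```

In practice the projection weights would be trained (e.g. with a ranking or contrastive loss over matched query/code pairs) rather than randomly initialized as in this toy example.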
1 code implementation • Findings (NAACL) 2022 • Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Kazi Samin, Md Saiful Islam, Anindya Iqbal, M. Sohel Rahman, Rifat Shahriyar
In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU) model pretrained in Bangla, a widely spoken yet low-resource language in the NLP literature.
1 code implementation • 4 Oct 2020 • Faria Huq, Nafees Ahmed, Anindya Iqbal
As the choice of words and syntax varies across textual descriptions, it is challenging for the system to reliably produce a consistently desirable output from different forms of language input.
1 code implementation • 7 Dec 2019 • Md. Khairul Islam, Toufique Ahmed, Rifat Shahriyar, Anindya Iqbal, Gias Uddin
In our empirical study on the 146,612 code changes from the three software projects, we find that (1) the new features, such as reviewer dimensions, introduced in PredCR are the most informative.
1 code implementation • 12 May 2019 • Junaed Younus Khan, Md. Tawkat Islam Khondaker, Sadia Afroz, Gias Uddin, Anindya Iqbal
In this research, we conducted a benchmark study to assess the performance of different applicable machine learning approaches on three different datasets, of which we accumulated the largest and most diversified one.