no code implementations • 10 Oct 2024 • Zixiao Zhao, Fatemeh H. Fard
Although these models have achieved state-of-the-art performance on SE tasks for many popular programming languages, such as Java and Python, scientific software and its related languages, such as the R programming language, have rarely benefited from, or even been evaluated with, Code-PLMs.
no code implementations • 4 Oct 2024 • Sathvik Joel, Jie JW Wu, Fatemeh H. Fard
This survey serves as a resource for researchers and practitioners at the intersection of LLMs, software engineering, and specialized programming languages, laying the groundwork for future advancements in code generation for LRPLs and DSLs.
no code implementations • 18 Aug 2024 • Meghdad Dehghan, Jie JW Wu, Fatemeh H. Fard, Ali Ouni
[Context] Large Language Models (LLMs) have shown good performance in several software development tasks, such as program repair, documentation, code refactoring, debugging, and testing.
1 code implementation • 19 Jun 2024 • Davit Abrahamyan, Fatemeh H. Fard
Developers spend considerable time finding information relevant to their questions.
no code implementations • 16 Mar 2024 • Amirreza Esmaeili, Iman Saberi, Fatemeh H. Fard
We will assess their performance compared to fully fine-tuned models, whether they can be used to transfer knowledge from natural language models to code (using T5 and Llama models), and how well they adapt the learned knowledge to an unseen language.
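For illustration, here is a minimal sketch of one common PEFT method, LoRA, applied via the Hugging Face peft library to a T5 backbone; the concrete methods, checkpoints, and hyperparameters studied in the paper may differ, and everything below is an illustrative assumption:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

# Load a natural-language-pretrained T5 backbone (illustrative checkpoint).
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# LoRA configuration: low-rank updates injected into T5's attention
# query/value projections; rank and scaling are arbitrary example values.
config = LoraConfig(task_type="SEQ_2_SEQ_LM", r=8, lora_alpha=16,
                    target_modules=["q", "v"])
model = get_peft_model(model, config)

# Only the small LoRA matrices are trainable; the backbone stays frozen.
model.print_trainable_parameters()
```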
1 code implementation • 6 Feb 2024 • Zixiao Zhao, Millon Madhur Das, Fatemeh H. Fard
Pre-trained Code Language Models (Code-PLMs) have advanced considerably over the past few years and achieved state-of-the-art results on many software engineering tasks.
1 code implementation • 19 Apr 2022 • Divyam Goel, Ramansh Grover, Fatemeh H. Fard
Although adapters are known to facilitate adaptation to many downstream tasks compared to full fine-tuning, which requires retraining all of a model's parameters -- thanks to their plug-and-play nature and parameter efficiency -- their use in software engineering has not been explored.
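As background, a minimal sketch of a Houlsby-style bottleneck adapter in PyTorch, illustrating the plug-and-play, parameter-efficient design the abstract refers to; layer names and sizes are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module inserted into a frozen pre-trained model."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)  # project down
        self.up = nn.Linear(bottleneck_size, hidden_size)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen model's representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Plug-and-play usage: freeze the backbone, train only the adapters.
# for p in backbone.parameters():
#     p.requires_grad = False
# adapter = BottleneckAdapter(hidden_size=768)  # the only trainable weights
```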
no code implementations • 5 Apr 2022 • Mohammad Abdul Hadi, Imam Nur Bani Yusuf, Ferdian Thung, Kien Gia Luong, Jiang Lingxiao, Fatemeh H. Fard, David Lo
We have also identified two tokenization approaches that can significantly boost PTMs' performance on the API sequence generation task.
no code implementations • 4 Feb 2022 • Yue Cao, Fatemeh H. Fard
In this paper, we evaluate PTMs for generating replies to mobile app user feedback.
no code implementations • 12 Apr 2021 • Mohammad Abdul Hadi, Fatemeh H. Fard
In addition, we investigate the performance of the PTMs trained on app reviews (i.e., domain-specific PTMs).
no code implementations • 19 Mar 2021 • Ramin Shahbazi, Rishab Sharma, Fatemeh H. Fard
However, as the number of APIs used in a method increases, the model's performance in generating comments decreases due to the long documentation included in the input.