Search Results for author: Fatemeh H. Fard

Found 11 papers, 3 papers with code

Do Current Language Models Support Code Intelligence for R Programming Language?

no code implementations • 10 Oct 2024 • Zixiao Zhao, Fatemeh H. Fard

Though these models have achieved state-of-the-art performance on SE tasks for many popular programming languages, such as Java and Python, scientific software and its related languages, like the R programming language, have rarely benefited from, or even been evaluated with, Code-PLMs.

Code Summarization • Method name prediction

A Survey on LLM-based Code Generation for Low-Resource and Domain-Specific Programming Languages

no code implementations • 4 Oct 2024 • Sathvik Joel, Jie JW Wu, Fatemeh H. Fard

This survey serves as a resource for researchers and practitioners at the intersection of LLMs, software engineering, and specialized programming languages, laying the groundwork for future advancements in code generation for LRPLs and DSLs.

Code Generation

MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair

no code implementations • 18 Aug 2024 • Meghdad Dehghan, Jie JW Wu, Fatemeh H. Fard, Ali Ouni

[Context] Large Language Models (LLMs) have shown good performance in several software development-related tasks such as program repair, documentation, code refactoring, debugging, and testing.

parameter-efficient fine-tuning • Program Repair

Empirical Studies of Parameter Efficient Methods for Large Language Models of Code and Knowledge Transfer to R

no code implementations • 16 Mar 2024 • Amirreza Esmaeili, Iman Saberi, Fatemeh H. Fard

We will assess their performance compared to fully fine-tuned models, whether they can be used for knowledge transfer from natural language models to code (using T5 and Llama models), and their ability to adapt the learned knowledge to an unseen language.

parameter-efficient fine-tuning • Transfer Learning

Studying Vulnerable Code Entities in R

1 code implementation • 6 Feb 2024 • Zixiao Zhao, Millon Madhur Das, Fatemeh H. Fard

Pre-trained Code Language Models (Code-PLMs) have shown many advancements and achieved state-of-the-art results for many software engineering tasks in the past few years.

Code Summarization • Method name prediction

On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules

1 code implementation • 19 Apr 2022 • Divyam Goel, Ramansh Grover, Fatemeh H. Fard

Adapters are known to ease adaptation to many downstream tasks compared to fine-tuning, which requires retraining all of a model's parameters, owing to their plug-and-play nature and parameter efficiency; however, their usage in software engineering has not been explored.

Clone Detection • Cloze Test • +1
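
To make the adapter idea referenced in the abstract above concrete, here is a minimal sketch of a standard bottleneck adapter (down-projection, non-linearity, up-projection, residual connection) in PyTorch. The class name, layer sizes, and choice of GELU are illustrative assumptions for this listing, not details taken from the paper.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Illustrative bottleneck adapter: only these small layers are trained,
        while the surrounding pre-trained model stays frozen."""
        def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_size, bottleneck_size)  # down-projection
            self.up = nn.Linear(bottleneck_size, hidden_size)    # up-projection
            self.act = nn.GELU()

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # Residual connection keeps the frozen model's representation intact.
            return hidden_states + self.up(self.act(self.down(hidden_states)))

    # Example: roughly 2 * 768 * 64 trainable weights instead of the full model.
    x = torch.randn(1, 16, 768)   # (batch, sequence length, hidden size)
    adapter = BottleneckAdapter()
    print(adapter(x).shape)       # torch.Size([1, 16, 768])
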

On the Effectiveness of Pretrained Models for API Learning

no code implementations • 5 Apr 2022 • Mohammad Abdul Hadi, Imam Nur Bani Yusuf, Ferdian Thung, Kien Gia Luong, Jiang Lingxiao, Fatemeh H. Fard, David Lo

We have also identified two different tokenization approaches that can contribute to a significant boost in PTMs' performance for the API sequence generation task.

Information Retrieval • Language Modelling • +2

API2Com: On the Improvement of Automatically Generated Code Comments Using API Documentations

no code implementations • 19 Mar 2021 • Ramin Shahbazi, Rishab Sharma, Fatemeh H. Fard

However, as the number of APIs used in a method increases, the model's performance in generating comments decreases, due to the long documentation included in the input.

Comment Generation • Machine Translation
