Search Results for author: Mohammed Latif Siddiq

Found 6 papers, 2 papers with code

Quality Assessment of Prompts Used in Code Generation

no code implementations • 15 Apr 2024 • Mohammed Latif Siddiq, Simantika Dristi, Joy Saha, Joanna C. S. Santos

We found that code generation evaluation benchmarks mainly focused on Python and coding exercises and had very limited contextual dependencies to challenge the model.

Code Generation • GPT-3.5 • +1

Generate and Pray: Using SALLMS to Evaluate the Security of LLM Generated Code

no code implementations • 1 Nov 2023 • Mohammed Latif Siddiq, Joanna C. S. Santos

This framework has three major components: a novel dataset of security-centric Python prompts, an evaluation environment to test the generated code, and novel metrics to evaluate the models' performance from the perspective of secure code generation.

Code Generation
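
The SALLM entry above describes an evaluation pipeline built around security-centric prompts, an evaluation environment for the generated code, and security-oriented metrics. As a rough illustration only, the sketch below scores a batch of generated snippets with a secure@k-style estimator (mirroring the unbiased pass@k estimator); the pattern-based security check and the exact metric definition are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: score generated snippets for one security-centric prompt.
from math import comb

def is_secure(snippet: str) -> bool:
    # Placeholder static check; a real evaluation environment would run a
    # security analyzer (e.g., CodeQL or Bandit) on the generated code.
    banned = ("eval(", "exec(", "os.system(", "pickle.loads(")
    return not any(tok in snippet for tok in banned)

def secure_at_k(snippets: list[str], k: int) -> float:
    """Estimate P(at least one of k sampled snippets is secure), pass@k-style."""
    n = len(snippets)
    c = sum(is_secure(s) for s in snippets)  # number of secure samples
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples for one prompt, 7 of which pass the security check.
samples = ["import subprocess\nsubprocess.run(['ls'])"] * 7 + ["os.system('ls')"] * 3
print(secure_at_k(samples, k=1))  # 0.7
```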

A Lightweight Framework for High-Quality Code Generation

no code implementations • 17 Jul 2023 • Mohammed Latif Siddiq, Beatrice Casey, Joanna C. S. Santos

FRANC includes a static filter to make the generated code compilable with heuristics and a quality-aware ranker to sort the code snippets based on a quality score.

Code Generation • Prompt Engineering
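
As a rough sketch of the filter-then-rank idea in the FRANC entry above, the snippet below drops Python candidates that fail to parse and orders the rest by a toy quality score; both heuristics are assumptions for illustration, not the paper's actual filter or ranker.

```python
# Illustrative filter-then-rank pipeline for generated Python snippets.
import ast

def compiles(snippet: str) -> bool:
    """Static filter: keep only snippets that parse as valid Python."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

def quality_score(snippet: str) -> float:
    """Toy quality heuristic: reward docstrings, penalize bare excepts."""
    score = 0.0
    for node in ast.walk(ast.parse(snippet)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and ast.get_docstring(node):
            score += 1.0
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            score -= 1.0
    return score

def filter_and_rank(snippets: list[str]) -> list[str]:
    kept = [s for s in snippets if compiles(s)]
    return sorted(kept, key=quality_score, reverse=True)

candidates = [
    'def add(a, b):\n    """Return the sum."""\n    return a + b',
    "def add(a, b) return a + b",  # dropped: syntax error
    "def add(a, b):\n    try:\n        return a + b\n    except:\n        pass",
]
print(filter_and_rank(candidates)[0])  # the documented variant ranks first
```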

Enhancing Automated Program Repair through Fine-tuning and Prompt Engineering

no code implementations • 16 Apr 2023 • Rishov Paul, Md. Mohib Hossain, Mohammed Latif Siddiq, Masum Hasan, Anindya Iqbal, Joanna C. S. Santos

We applied PLBART and CodeT5, two state-of-the-art language models pre-trained on both programming languages (PL) and natural language (NL), to two natural language-based program repair datasets, and found that models fine-tuned on datasets containing both the code review and the subsequent code changes notably outperformed each of the previous models.

GPT-3.5 • Program Repair • +1
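
A minimal sketch of the seq2seq fine-tuning setup the entry above describes, assuming a Hugging Face CodeT5 checkpoint: the natural-language review comment is concatenated with the pre-change code as the source sequence, and the post-change code is the target. The prompt template, example triple, and training loop are illustrative, not the paper's exact pipeline.

```python
# Illustrative one-step fine-tuning of CodeT5 on a (review, code-before, code-after) triple.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

def encode_example(review_comment: str, code_before: str, code_after: str):
    """Build one (source, target) pair from a review comment and its code change."""
    source = f"review: {review_comment} code: {code_before}"
    batch = tokenizer(source, truncation=True, max_length=512, return_tensors="pt")
    batch["labels"] = tokenizer(code_after, truncation=True, max_length=256,
                                return_tensors="pt").input_ids
    return batch

batch = encode_example(
    "Use a context manager so the file is always closed.",
    "f = open(path)\ndata = f.read()",
    "with open(path) as f:\n    data = f.read()",
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**batch).loss  # teacher-forced cross-entropy on the target code
loss.backward()
optimizer.step()
```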
