Search Results for author: Saeed Parsa

Found 8 papers, 2 papers with code

Natural Language Requirements Testability Measurement Based on Requirement Smells

1 code implementation • 26 Mar 2024 • Morteza Zakeri-Nasrabadi, Saeed Parsa

However, despite the importance of measuring and quantifying requirements testability, no automatic approach has yet been proposed to measure requirements testability based on requirement smells, which are at odds with testability.
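The idea of scoring testability from smells can be illustrated with a toy sketch. The smell patterns and the scoring formula below are my own simplified assumptions for illustration, not the paper's catalogue or metric:

```python
import re

# Hypothetical smell patterns (illustrative only; the paper's actual
# catalogue of requirement smells is richer than this).
SMELL_PATTERNS = {
    "ambiguous_adverb": r"\b(quickly|easily|appropriately|efficiently)\b",
    "vague_pronoun": r"\b(it|this|that)\b",
    "non_verifiable": r"\b(user-friendly|flexible|robust)\b",
}

def requirement_testability(requirement: str) -> float:
    """Toy metric: testability drops as more smell words appear."""
    words = requirement.split()
    smells = sum(
        len(re.findall(pattern, requirement, flags=re.IGNORECASE))
        for pattern in SMELL_PATTERNS.values()
    )
    # Normalize by requirement length so long requirements are not
    # penalized merely for containing more words.
    return max(0.0, 1.0 - smells / max(len(words), 1))

# Two smell words ("quickly", "user-friendly") out of eight words:
score = requirement_testability("The system shall respond quickly and be user-friendly.")
```

A real implementation would detect smells with NLP techniques rather than keyword lists, but the inverse relationship between smell density and testability is the core intuition.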

Mitigating Backdoors within Deep Neural Networks in Data-limited Configuration

no code implementations • 13 Nov 2023 • Soroush Hashemifar, Saeed Parsa, Morteza Zakeri-Nasrabadi

As the capacity of deep neural networks (DNNs) increases, their need for huge amounts of data significantly grows.

Path Analysis for Effective Fault Localization in Deep Neural Networks

no code implementations • 29 Oct 2023 • Soroush Hashemifar, Saeed Parsa, Akram Kalaee

Deep learning has revolutionized various real-world applications, but the quality of Deep Neural Networks (DNNs) remains a concern.

Fault Detection, Fault Localization

A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges

no code implementations • 28 Jun 2023 • Morteza Zakeri-Nasrabadi, Saeed Parsa, Mohammad Ramezani, Chanchal Roy, Masoud Ekhtiarzadeh

This paper presents a systematic literature review and meta-analysis of code similarity measurement and evaluation techniques to shed light on existing approaches and their characteristics across different applications.

Clone Detection

A systematic literature review on the code smells datasets and validation mechanisms

no code implementations • 2 Jun 2023 • Morteza Zakeri-Nasrabadi, Saeed Parsa, Ehsan Esmaili, Fabio Palomba

The accuracy reported for code smell-detecting tools varies depending on the dataset used to evaluate the tools.

Learning to predict test effectiveness

no code implementations • 20 Aug 2022 • Morteza Zakeri-Nasrabadi, Saeed Parsa

Compared with the state-of-the-art coverage prediction models, our models improve MAE, MSE, and R2-score by 5.78%, 2.84%, and 20.71%, respectively.

Feature Importance
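The reported gains refer to standard regression metrics. As a reminder of their definitions, here is a minimal sketch computing MAE, MSE, and R2-score; the sample numbers are invented for illustration and are not the paper's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, and R2-score for a coverage-prediction model."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    mae = np.mean(np.abs(residuals))          # mean absolute error
    mse = np.mean(residuals ** 2)             # mean squared error
    ss_res = np.sum(residuals ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                # coefficient of determination
    return mae, mse, r2

# Toy coverage fractions (invented, not from the paper):
mae, mse, r2 = regression_metrics([0.8, 0.6, 0.9, 0.4], [0.75, 0.65, 0.85, 0.5])
```

Note that improving MAE and MSE means lowering them, while improving R2-score means raising it toward 1.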

An ensemble meta-estimator to predict source code testability

1 code implementation • 20 Aug 2022 • Morteza Zakeri-Nasrabadi, Saeed Parsa

Therefore, testability can be measured based on the coverage and number of test cases provided by a test suite, considering the test budget.

Feature Importance, Regression

Format-aware Learn&Fuzz: Deep Test Data Generation for Efficient Fuzzing

no code implementations • 24 Dec 2018 • Morteza Zakeri Nasrabadi, Saeed Parsa, Akram Kalaee

Our approach generates new test data while distinguishing between data and meta-data, which makes it possible to target both the parsing and rendering parts of the software under test (SUT).

Software Engineering
