Search Results for author: Andrea Capiluppi

Found 6 papers, 2 papers with code

SATDAUG -- A Balanced and Augmented Dataset for Detecting Self-Admitted Technical Debt

no code implementations · 12 Mar 2024 · Edi Sutoyo, Andrea Capiluppi

Self-admitted technical debt (SATD) refers to a form of technical debt in which developers explicitly acknowledge and document the existence of technical shortcuts, workarounds, or temporary solutions within the codebase.
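To make the definition concrete, here is a minimal sketch of what SATD comments can look like in source code; the function and comment text are hypothetical examples, not drawn from the SATDAUG dataset:

```python
# Hypothetical self-admitted technical debt (SATD) comments: the developer
# explicitly acknowledges a shortcut or temporary solution in the code.

def load_user_settings(path):
    # TODO: temporary workaround -- return hard-coded defaults until the
    # config parser is rewritten; remove this shortcut later.
    return {"theme": "light", "timeout": 30}

# FIXME: hack -- duplicates logic from load_user_settings, needs refactoring.
def load_admin_settings(path):
    return load_user_settings(path)
```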

Automated Approaches to Detect Self-Admitted Technical Debt: A Systematic Literature Review

1 code implementation · 19 Dec 2023 · Edi Sutoyo, Andrea Capiluppi

This systematic literature review proposes a taxonomy of the feature extraction techniques and ML/DL algorithms used in technical debt detection, with the objective of comparing and benchmarking their performance across the examined studies.
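As a rough illustration of the kind of pipeline such a taxonomy covers, the sketch below pairs one feature extraction technique (TF-IDF) with one classical ML algorithm (logistic regression). This is only one possible combination, not necessarily the review's benchmark setup, and the comments and labels are invented for illustration:

```python
# Minimal SATD detection sketch: TF-IDF features + logistic regression.
# The training comments and labels below are toy data, not from any dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "TODO: hack, fix this properly after the release",
    "FIXME: temporary workaround for the parser bug",
    "compute the checksum of the uploaded file",
    "iterate over all active sessions and close them",
]
labels = [1, 1, 0, 0]  # 1 = SATD, 0 = non-SATD

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

print(model.predict(["TODO: quick and dirty fix, refactor later"]))  # -> [1]
```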

GitRanking: A Ranking of GitHub Topics for Software Classification using Active Sampling

1 code implementation · 19 May 2022 · Cezar Sas, Andrea Capiluppi, Claudio Di Sipio, Juri Di Rocco, Davide Di Ruscio

Finally, we show that GitRanking is a dynamically extensible method: it can currently accept further terms to be ranked with a minimum number of annotations ($\sim$ 15).

Tasks: Domain Classification, Specificity

Antipatterns in Software Classification Taxonomies

no code implementations · 19 Apr 2022 · Cezar Sas, Andrea Capiluppi

This is a known issue that requires the establishment of a classification of software types.

Tasks: Classification

LabelGit: A Dataset for Software Repositories Classification using Attributed Dependency Graphs

no code implementations · 16 Mar 2021 · Cezar Sas, Andrea Capiluppi

Using this dataset, we hope to aid the development of solutions that do not rely on proxies but use the entire source code to perform classification.

Tasks: General Classification
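For readers unfamiliar with the term, the sketch below shows one way an attributed dependency graph for a repository might be represented, assuming nodes are source files annotated with code-derived attributes and edges are import dependencies. It only illustrates the general idea; it is not the actual LabelGit data format:

```python
# Sketch of an attributed dependency graph for a repository (illustrative
# structure only, not the LabelGit schema).
import networkx as nx

g = nx.DiGraph(repository="example/project", label="web-framework")

# Nodes: source files, with identifier tokens as attributes.
g.add_node("app/server.py", identifiers=["start_server", "Request", "Response"])
g.add_node("app/routing.py", identifiers=["Router", "add_route", "match"])
g.add_node("app/utils.py", identifiers=["parse_headers", "encode_body"])

# Edges: "depends on" relations (e.g., imports).
g.add_edge("app/server.py", "app/routing.py")
g.add_edge("app/server.py", "app/utils.py")
g.add_edge("app/routing.py", "app/utils.py")

# A classifier can consume both the node attributes and the graph structure.
print(g.number_of_nodes(), g.number_of_edges())  # -> 3 3
```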

The Prevalence of Errors in Machine Learning Experiments

no code implementations · 10 Sep 2019 · Martin Shepperd, Yuchen Guo, Ning Li, Mahir Arzoky, Andrea Capiluppi, Steve Counsell, Giuseppe Destefanis, Stephen Swift, Allan Tucker, Leila Yousefi

Objective: We investigate the incidence of errors in a sample of machine learning experiments in the domain of software defect prediction.

Tasks: BIG-bench Machine Learning
