Search Results for author: Suvodeep Majumder

Found 9 papers, 6 papers with code

Fair-SSL: Building fair ML Software with less data

1 code implementation · 3 Nov 2021 · Joymallya Chakraborty, Suvodeep Majumder, Huy Tu

Semi-supervised learning is a machine learning technique in which a small amount of labeled data is used, incrementally, to generate pseudo-labels for the rest of the data (and then all of that data is used for model training).
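The pseudo-labeling loop described above can be sketched as a minimal self-training routine (a hypothetical illustration of the general technique, not the Fair-SSL implementation; the thresholds and synthetic data are assumptions):

```python
# Minimal self-training sketch: a model fit on a small labeled seed
# pseudo-labels the unlabeled pool, and only confident predictions are
# added back to the training set in each round.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic linearly separable labels

X_lab, y_lab = X[:20], y[:20]             # small labeled seed
pool = X[20:]                             # unlabeled pool

for _ in range(5):                        # incremental self-training rounds
    model = LogisticRegression().fit(X_lab, y_lab)
    if len(pool) == 0:
        break
    proba = model.predict_proba(pool)
    keep = proba.max(axis=1) > 0.95       # adopt only confident pseudo-labels
    if not keep.any():
        break
    X_lab = np.vstack([X_lab, pool[keep]])
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    pool = pool[~keep]

final = LogisticRegression().fit(X_lab, y_lab)
print(final.score(X, y))
```

The 0.95 confidence cutoff trades label noise against how much of the pool gets absorbed per round; stricter cutoffs need more rounds but admit fewer wrong pseudo-labels.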

Attribute Fairness

Fair Enough: Searching for Sufficient Measures of Fairness

1 code implementation · 25 Oct 2021 · Suvodeep Majumder, Joymallya Chakraborty, Gina R. Bai, Kathryn T. Stolee, Tim Menzies

In summary, to simplify the fairness testing problem, we recommend the following steps: (1) determine what type of fairness is desirable (and we offer a handful of such types); then (2) look up those types in our clusters; then (3) just test for one item per cluster.
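The three steps above can be sketched as a small lookup routine (a hypothetical illustration only; the cluster names and metric groupings here are invented placeholders, not the paper's actual clusters):

```python
# Sketch of the recommended workflow: correlated fairness metrics are
# grouped into clusters, and only one representative metric per matching
# cluster is tested. Cluster contents are illustrative placeholders.
clusters = {
    "group_parity": ["statistical_parity", "disparate_impact"],
    "error_balance": ["equal_opportunity", "equalized_odds"],
}

def metrics_to_test(desired_types):
    # Step (2): look up each desired fairness type in the clusters;
    # Step (3): keep just the first (representative) metric per cluster.
    return [members[0] for name, members in clusters.items()
            if name in desired_types]

print(metrics_to_test(["group_parity"]))  # one metric stands in for its cluster
```

The point of the clustering is that metrics within a cluster rise and fall together, so testing one representative suffices.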

Fairness

Bias in Machine Learning Software: Why? How? What to do?

2 code implementations · 25 May 2021 · Joymallya Chakraborty, Suvodeep Majumder, Tim Menzies

This paper postulates that the root causes of bias are the prior decisions that affect (a) what data was selected and (b) the labels assigned to those examples.

Attribute BIG-bench Machine Learning +1

Early Life Cycle Software Defect Prediction. Why? How?

1 code implementation · 26 Nov 2020 · N. C. Shrikanth, Suvodeep Majumder, Tim Menzies

Hence, defect predictors learned from the first 150 commits and four months perform just as well as anything else.

Transfer Learning

Revisiting Process versus Product Metrics: a Large Scale Analysis

no code implementations · 21 Aug 2020 · Suvodeep Majumder, Pranav Mody, Tim Menzies

We find that some analytics in-the-small conclusions still hold when scaling up to analytics in-the-large.

Methods for Stabilizing Models across Large Samples of Projects (with case studies on Predicting Defect and Project Health)

1 code implementation · 6 Nov 2019 · Suvodeep Majumder, Tianpei Xia, Rahul Krishna, Tim Menzies

To the best of our knowledge, STABILIZER is an order of magnitude faster than the prior state-of-the-art transfer learners that seek conclusion stability, and these case studies are the largest demonstration of the generalizability of quantitative predictions of project quality yet reported in the SE literature.

Transfer Learning

500+ Times Faster Than Deep Learning (A Case Study Exploring Faster Methods for Text Mining StackOverflow)

no code implementations · 14 Feb 2018 · Suvodeep Majumder, Nikhila Balaji, Katie Brey, Wei Fu, Tim Menzies

Deep learners utilize extensive computational power and can take a long time to train, making it difficult to widely validate, repeat, and improve their results.

Clustering
