2 code implementations • 10 Nov 2022 • Suvodeep Majumder, Joymallya Chakraborty, Tim Menzies
Hence, there are often limits on how much labeled data is available for training.
no code implementations • 19 Oct 2022 • Suvodeep Majumder, Stanislas Lauly, Maria Nadejde, Marcello Federico, Georgiana Dinu
This paper addresses the task of contextual translation using multi-segment models.
1 code implementation • 3 Nov 2021 • Joymallya Chakraborty, Suvodeep Majumder, Huy Tu
Semi-supervised learning is a machine learning technique where labeled data is incrementally used to generate pseudo-labels for the remaining unlabeled data (and then all of that data is used for model training).
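The incremental pseudo-labeling loop described above can be sketched as follows. This is a minimal illustration on 1-D toy data using a simple nearest-class-mean classifier, not the classifier or data from the paper:

```python
# Minimal self-training (pseudo-labeling) sketch on 1-D toy data.
# The nearest-class-mean "model" is an illustrative stand-in, not the paper's method.

def class_means(labeled):
    """Mean feature value per class from (x, y) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, rounds=10):
    """Incrementally pseudo-label the unlabeled point closest to a class mean."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(min(rounds, len(unlabeled))):
        means = class_means(labeled)
        # pick the most "confident" point: closest to any class mean
        x = min(unlabeled, key=lambda p: min(abs(p - m) for m in means.values()))
        y = min(means, key=lambda c: abs(x - means[c]))
        labeled.append((x, y))   # pseudo-label joins the training set
        unlabeled.remove(x)
    return labeled

seed = [(0.0, "neg"), (10.0, "pos")]
pool = [1.0, 2.0, 8.5, 9.0]
print(self_train(seed, pool))
```

After each round the class means are re-estimated from the grown labeled set, so later pseudo-labels benefit from earlier ones.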
1 code implementation • 25 Oct 2021 • Suvodeep Majumder, Joymallya Chakraborty, Gina R. Bai, Kathryn T. Stolee, Tim Menzies
In summary, to simplify the fairness testing problem, we recommend the following steps: (1) determine what type of fairness is desirable (and we offer a handful of such types); then (2) look up those types in our clusters; then (3) just test for one item per cluster.
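The three steps above can be sketched as a lookup over metric clusters. The cluster names and metric groupings below are hypothetical placeholders, not the clusters reported in the paper:

```python
# Hypothetical sketch of "test one metric per cluster".
# CLUSTERS is illustrative; the paper derives its own clusters empirically.

CLUSTERS = {
    "group_fairness": ["statistical_parity", "disparate_impact"],
    "error_balance": ["equal_opportunity", "equalized_odds"],
}

def metrics_to_test(desired_types, clusters):
    """Steps (1)-(3): for each desired fairness type, find its cluster,
    then keep just one representative metric from that cluster."""
    picked = {}
    for name, members in clusters.items():
        if any(t in members for t in desired_types):
            picked[name] = members[0]   # one item per cluster suffices
    return picked

print(metrics_to_test(["disparate_impact", "equalized_odds"], CLUSTERS))
```

Because metrics within a cluster move together, testing one representative per matched cluster covers the desired fairness types at a fraction of the cost.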
2 code implementations • 25 May 2021 • Joymallya Chakraborty, Suvodeep Majumder, Tim Menzies
This paper postulates that the root causes of bias are the prior decisions that affect (a) what data was selected and (b) the labels assigned to those examples.
1 code implementation • 26 Nov 2020 • N. C. Shrikanth, Suvodeep Majumder, Tim Menzies
Hence, defect predictors learned from the first 150 commits and four months perform just as well as anything else.
no code implementations • 21 Aug 2020 • Suvodeep Majumder, Pranav Mody, Tim Menzies
We find that some analytics in-the-small conclusions still hold when scaling up to analytics in-the-large.
1 code implementation • 6 Nov 2019 • Suvodeep Majumder, Tianpei Xia, Rahul Krishna, Tim Menzies
To the best of our knowledge, STABILIZER is an order of magnitude faster than the prior state-of-the-art transfer learners that seek conclusion stability, and these case studies are the largest demonstration of the generalizability of quantitative predictions of project quality yet reported in the SE literature.
no code implementations • 14 Feb 2018 • Suvodeep Majumder, Nikhila Balaji, Katie Brey, Wei Fu, Tim Menzies
Deep learners utilize extensive computational power and can take a long time to train, making it difficult to widely validate, repeat, and improve their results.