Search Results for author: Masoud Hashemi

Found 3 papers, 1 paper with code

Scalable Whitebox Attacks on Tree-based Models

no code implementations · 31 Mar 2022 · Giuseppe Castiglione, Gavin Ding, Masoud Hashemi, Christopher Srinivasa, Ga Wu

Adversarial robustness is one of the essential safety criteria for guaranteeing the reliability of machine learning models.

Adversarial Robustness

PUMA: Performance Unchanged Model Augmentation for Training Data Removal

no code implementations · 2 Mar 2022 · Ga Wu, Masoud Hashemi, Christopher Srinivasa

It then compensates for the negative impact of removing the marked data by optimally reweighting the remaining data.

Model Optimization

PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards

1 code implementation · 24 Aug 2020 · Masoud Hashemi, Ali Fathi

We propose a model criticism and explanation framework based on adversarially generated counterfactual examples for tabular data.

Adversarial Attack · BIG-bench Machine Learning · +2
