Search Results for author: Loris D'Antoni

Found 12 papers, 4 papers with code

Verified Training for Counterfactual Explanation Robustness under Data Shift

no code implementations • 6 Mar 2024 • Anna P. Meyer, Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Our empirical evaluation demonstrates that VeriTraCER generates CEs that (1) are verifiably robust to small model updates and (2) display competitive robustness to state-of-the-art approaches in handling empirical model updates including random initialization, leave-one-out, and distribution shifts.
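
As a rough illustration of the robustness property referenced here (not the VeriTraCER training procedure itself), the sketch below retrains a plain scikit-learn logistic regression several times with one training point left out and checks whether a hand-picked counterfactual example still flips the model's prediction under every update; the data, the model choice, and the counterfactual point are all made up for the example.

```python
# Sketch: does a counterfactual explanation survive small model updates?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

x_input = np.array([[-0.3, -0.2]])   # input the model rejects
ce = np.array([[0.4, 0.4]])          # hypothetical counterfactual for x_input

# "Small model updates": retrain with one training point left out each time.
models = [LogisticRegression().fit(np.delete(X, i, axis=0), np.delete(y, i))
          for i in (0, 50, 100, 150)]

robust = all(int(m.predict(ce)[0]) == 1 for m in models)
print("CE flips the prediction under every model update:", robust)
```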

counterfactual · Counterfactual Explanation

The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions

1 code implementation • 20 Apr 2023 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni

We introduce dataset multiplicity, a way to study how inaccuracies, uncertainty, and social bias in training datasets impact test-time predictions.
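
A minimal sketch of the dataset-multiplicity idea as described in this sentence (the paper's formal treatment is richer): enumerate every plausible relabeling of a few uncertain training examples, retrain on each, and report whether a test point receives the same prediction from all of them. The dataset, the uncertain indices, and the decision-tree learner are assumptions made for illustration.

```python
# Sketch: is a test prediction stable across all plausible versions of the data?
import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(int)
uncertain = [3, 7, 11]                 # training labels that might be wrong
x_test = np.array([[0.05, 1.2]])

preds = set()
for k in range(len(uncertain) + 1):
    for flip in combinations(uncertain, k):       # every plausible relabeling
        y_alt = y.copy()
        y_alt[list(flip)] ^= 1
        tree = DecisionTreeClassifier(random_state=0).fit(X, y_alt)
        preds.add(int(tree.predict(x_test)[0]))

print("prediction is stable across all plausible datasets:", len(preds) == 1)
```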

counterfactual

PECAN: A Deterministic Certified Defense Against Backdoor Attacks

no code implementations • 27 Jan 2023 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Neural networks are vulnerable to backdoor poisoning attacks, where the attackers maliciously poison the training set and insert triggers into the test input to change the prediction of the victim model.
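
The sketch below only illustrates the threat model this sentence describes, not the PECAN defense: the attacker stamps a small trigger patch onto a handful of training images, relabels them as a target class, and later applies the same patch to a test input. The image sizes, poisoning rate, and target class are arbitrary.

```python
# Sketch of a backdoor poisoning attack: trigger insertion and relabeling.
import numpy as np

def add_trigger(image, value=1.0, size=3):
    """Stamp a size x size trigger patch in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = value
    return poisoned

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))                 # clean training images
labels = rng.integers(0, 10, size=100)

target_class = 7                                   # attacker's target label
poison_idx = rng.choice(100, size=5, replace=False)
poisoned_images, poisoned_labels = images.copy(), labels.copy()
poisoned_images[poison_idx] = np.stack([add_trigger(img) for img in images[poison_idx]])
poisoned_labels[poison_idx] = target_class

# At test time, the same trigger steers the victim model toward target_class.
triggered_test = add_trigger(rng.random((28, 28)))
```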

backdoor defense · Image Classification · +1

Certifying Data-Bias Robustness in Linear Regression

no code implementations • 7 Jun 2022 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni

Datasets typically contain inaccuracies due to human error and societal biases, and these inaccuracies can affect the outcomes of models trained on such datasets.
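
As a hedged, brute-force illustration of the robustness question behind this paper (the paper certifies it rather than enumerating), the sketch below refits an ordinary least-squares model after biasing each label by ±delta in turn and checks how far a test prediction can move; the data, delta, and tolerance are invented for the example.

```python
# Brute-force check: can one biased label move a regression prediction much?
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
x_test = np.array([[0.2, 0.1, -0.3]])
delta, tolerance = 0.5, 0.05           # assumed bias magnitude and threshold

base = LinearRegression().fit(X, y).predict(x_test)[0]
worst_gap = 0.0
for i in range(len(y)):                # bias one label at a time
    for sign in (+1.0, -1.0):
        y_biased = y.copy()
        y_biased[i] += sign * delta
        pred = LinearRegression().fit(X, y_biased).predict(x_test)[0]
        worst_gap = max(worst_gap, abs(pred - base))

print("prediction robust to one biased label:", worst_gap <= tolerance)
```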

regression

BagFlip: A Certified Defense against Data Poisoning

1 code implementation • 26 May 2022 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model.
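
A tiny, self-contained illustration of the poisoning threat described in this sentence (the attack, not the BagFlip defense): flipping one training label is enough to change what a 1-nearest-neighbor classifier predicts for a nearby test point.

```python
# Sketch: one flipped training label changes a 1-NN prediction.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0], [0.2], [0.9], [1.1]])
y = np.array([0, 0, 1, 1])
x_test = np.array([[0.25]])

clean = KNeighborsClassifier(n_neighbors=1).fit(X, y)

y_poisoned = y.copy()
y_poisoned[1] = 1                      # attacker flips the label nearest to x_test
poisoned = KNeighborsClassifier(n_neighbors=1).fit(X, y_poisoned)

print("clean prediction:   ", clean.predict(x_test)[0])     # 0
print("poisoned prediction:", poisoned.predict(x_test)[0])  # 1
```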

Backdoor Attack · Data Poisoning · +2

Certifying Robustness to Programmable Data Bias in Decision Trees

no code implementations • NeurIPS 2021 • Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni

To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point.
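
The sketch below is an explicit-enumeration stand-in for the symbolic evaluation this sentence describes: it retrains a scikit-learn decision tree on every dataset in a small perturbation family (here, dropping at most one training point) and declares the test point certified only if all trees agree. The real technique reasons about such families, which can be infinite, without enumerating them.

```python
# Enumeration stand-in for certification over a family of datasets.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
x_test = np.array([[0.8, 0.1]])

# Perturbation family: the original dataset plus every dataset missing one point.
datasets = [np.arange(len(X))] + [np.delete(np.arange(len(X)), i) for i in range(len(X))]

preds = {
    int(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]).predict(x_test)[0])
    for idx in datasets
}
print("certified (every dataset yields the same prediction):", len(preds) == 1)
```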

Fairness

Certified Robustness to Programmable Transformations in LSTMs

1 code implementation • EMNLP 2021 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Deep neural networks for natural language processing are fragile in the face of adversarial examples -- small input perturbations, like synonym substitution or word duplication, which cause a neural network to change its prediction.
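
A small sketch of the two perturbation kinds named here, synonym substitution and word duplication; the synonym table is a made-up stand-in for whatever lexicon an actual attack or defense would use.

```python
# Generate synonym-substitution and word-duplication perturbations of a sentence.
from itertools import product

SYNONYMS = {"good": ["great", "fine"], "movie": ["film"]}   # hypothetical lexicon

def perturbations(sentence):
    words = sentence.split()
    # synonym substitution: every combination of allowed replacements
    choices = [[w] + SYNONYMS.get(w, []) for w in words]
    for combo in product(*choices):
        yield " ".join(combo)
    # word duplication: repeat each word once
    for i in range(len(words)):
        yield " ".join(words[:i + 1] + [words[i]] + words[i + 1:])

for variant in perturbations("a good movie"):
    print(variant)
```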

Robustness to Programmable String Transformations via Augmented Abstract Training

1 code implementation • ICML 2020 • Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

We then present an approach to adversarially training models that are robust to such user-defined string transformations.
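
Below is a hedged sketch of augmentation-style adversarial training against user-defined string transformations, a simplification of the paper's approach (which additionally uses abstract training): on each pass the current model is shown, for every example, the transformed variant it handles worst. The transformation set, feature extractor, and classifier are arbitrary choices for the sketch.

```python
# Simplified adversarial training loop over user-defined string transformations.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

def transformations(sentence):
    """User-defined string transformations: identity plus word duplication."""
    words = sentence.split()
    yield sentence
    for i in range(len(words)):
        yield " ".join(words[:i + 1] + [words[i]] + words[i + 1:])

texts = ["great movie", "boring plot", "wonderful acting", "terrible pacing"]
labels = np.array([1, 0, 1, 0])

vec = HashingVectorizer(n_features=2 ** 12)
clf = SGDClassifier(random_state=0)
clf.partial_fit(vec.transform(texts), labels, classes=[0, 1])

for _ in range(5):                      # augmented adversarial training loop
    hardest = []
    for text, y in zip(texts, labels):
        variants = list(transformations(text))
        margins = clf.decision_function(vec.transform(variants))
        scores = margins if y == 0 else -margins   # most-wrong variant
        hardest.append(variants[int(np.argmax(scores))])
    clf.partial_fit(vec.transform(hardest), labels)
```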

Proving Data-Poisoning Robustness in Decision Trees

no code implementations • 2 Dec 2019 • Samuel Drews, Aws Albarghouthi, Loris D'Antoni

Machine learning models are brittle, and small changes in the training data can result in different predictions.
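
A concrete instance of the brittleness claimed here: deleting a single training example shifts a decision tree's split threshold and flips the prediction for a nearby test point.

```python
# Removing one training point changes a decision tree's prediction at 2.8.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
x_test = np.array([[2.8]])

full = DecisionTreeClassifier(random_state=0).fit(X, y)
one_removed = DecisionTreeClassifier(random_state=0).fit(
    np.delete(X, 3, axis=0), np.delete(y, 3))

# The learned split moves from 2.5 to 3.0, so the prediction for 2.8 flips.
print(full.predict(x_test)[0], one_removed.predict(x_test)[0])
```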

BIG-bench Machine Learning · Data Poisoning

Quantifying Program Bias

no code implementations • 17 Feb 2017 • Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori

With the range and sensitivity of algorithmic decisions expanding at a break-neck speed, it is imperative that we aggressively investigate whether programs are biased.

Decision Making · Fairness

Fairness as a Program Property

no code implementations • 19 Oct 2016 • Aws Albarghouthi, Loris D'Antoni, Samuel Drews, Aditya Nori

We explore the following question: Is a decision-making program fair, for some useful definition of fairness?
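
One concrete choice for "some useful definition of fairness" is demographic parity. The sketch below only estimates it by sampling a toy decision-making program (the paper verifies such properties rather than simulating them), and both the program and the population model are invented for illustration.

```python
# Estimate demographic parity of a toy decision program by sampling.
import random

def hiring_program(years_experience, group):
    """Toy decision-making program under scrutiny."""
    return years_experience + (1 if group == "A" else 0) > 5

random.seed(0)
rates = {}
for group in ("A", "B"):
    hired = sum(hiring_program(random.gauss(5, 2), group) for _ in range(100_000))
    rates[group] = hired / 100_000

ratio = min(rates.values()) / max(rates.values())
print(rates, "satisfies the 80% rule:", ratio >= 0.8)
```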

Decision Making · Fairness

Learning Syntactic Program Transformations from Examples

no code implementations • 31 Aug 2016 • Reudismam Rolim, Gustavo Soares, Loris D'Antoni, Oleksandr Polozov, Sumit Gulwani, Rohit Gheyi, Ryo Suzuki, Bjoern Hartmann

In the second domain, we use repetitive edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code.
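
A toy sketch of the "learn an edit once, apply it elsewhere" idea this sentence describes; the actual system synthesizes transformations over syntax trees, whereas this stand-in only generalizes a single textual replacement by stripping the common prefix and suffix of one example edit.

```python
# Generalize one example edit into a crude (old, new) rewrite and reapply it.
def learn_rewrite(before, after):
    p = 0
    while p < min(len(before), len(after)) and before[p] == after[p]:
        p += 1
    s = 0
    while (s < min(len(before), len(after)) - p
           and before[-1 - s] == after[-1 - s]):
        s += 1
    return before[p:len(before) - s], after[p:len(after) - s]

old, new = learn_rewrite("items.getLength()", "items.length()")
# Apply the learned rewrite to another location in the code.
print("names.getLength()".replace(old, new))   # -> names.length()
```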
