Search Results for author: Michael Aerni

Found 4 papers, 4 papers with code

Evaluations of Machine Learning Privacy Defenses are Misleading

1 code implementation • 26 Apr 2024 • Michael Aerni, Jie Zhang, Florian Tramèr

Empirical defenses for machine learning privacy forgo the provable guarantees of differential privacy in the hope of achieving higher utility while resisting realistic adversaries.
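To make concrete what such an evaluation can look like, below is a minimal sketch of a classic loss-thresholding membership-inference baseline (in the spirit of Yeom et al.), scored both by average-case balanced accuracy and by true-positive rate at a low false-positive rate. The contrast between the two metrics echoes the paper's theme, but the specific attack, function names, and toy loss distributions are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def attack_metrics(member_losses, nonmember_losses, target_fpr=0.001):
    """Loss-thresholding membership inference: guess "training member"
    whenever an example's loss is below a threshold. Returns the best
    balanced accuracy over all thresholds, and the best true-positive
    rate achievable at (or below) a fixed low false-positive rate."""
    losses = np.concatenate([member_losses, nonmember_losses])
    is_member = np.concatenate([
        np.ones(len(member_losses), dtype=bool),
        np.zeros(len(nonmember_losses), dtype=bool),
    ])
    best_bal_acc, best_tpr = 0.0, 0.0
    for threshold in np.sort(losses):
        guess = losses <= threshold  # low loss => guess "member"
        tpr = guess[is_member].mean()
        fpr = guess[~is_member].mean()
        best_bal_acc = max(best_bal_acc, (tpr + (1 - fpr)) / 2)
        if fpr <= target_fpr:
            best_tpr = max(best_tpr, tpr)
    return best_bal_acc, best_tpr

# Toy losses: members are somewhat more confident than non-members.
rng = np.random.default_rng(0)
members = rng.exponential(0.5, size=2000)     # hypothetical training losses
nonmembers = rng.exponential(1.0, size=2000)  # hypothetical held-out losses
bal_acc, tpr = attack_metrics(members, nonmembers)
print(f"balanced accuracy: {bal_acc:.3f}")  # looks moderately strong ...
print(f"TPR at 0.1% FPR:   {tpr:.4f}")      # ... yet few members are confidently identified
```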

Strong inductive biases provably prevent harmless interpolation

1 code implementation • 18 Jan 2023 • Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang

Classical wisdom suggests that estimators should avoid fitting noise to achieve good generalization.

Inductive Bias
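As a toy illustration of that classical wisdom (and not of the paper's provable setting), the sketch below interpolates noisy 1-D data with a degree-(n−1) polynomial and compares its test error against a lightly ridge-regularized fit. All data and hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy 1-D regression problem: y = sin(2*pi*x) + noise.
n = 15
x_train = np.sort(rng.uniform(0, 1, n))
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.normal(size=n)
x_test = np.linspace(0, 1, 500)
y_test = np.sin(2 * np.pi * x_test)

def poly_features(x, degree):
    # Columns x^0, x^1, ..., x^degree.
    return np.vander(x, degree + 1, increasing=True)

def fit(x, y, degree, lam):
    """Least squares on polynomial features; lam = 0 yields an exact
    interpolant of the noisy labels when degree = n - 1."""
    phi = poly_features(x, degree)
    if lam == 0:
        return np.linalg.pinv(phi) @ y
    return np.linalg.solve(phi.T @ phi + lam * np.eye(phi.shape[1]), phi.T @ y)

for lam in (0.0, 1e-3):
    w = fit(x_train, y_train, degree=n - 1, lam=lam)
    mse = np.mean((poly_features(x_test, n - 1) @ w - y_test) ** 2)
    print(f"lambda = {lam:g}: test MSE = {mse:.3f}")  # interpolating the noise hurts
```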

Interpolation can hurt robust generalization even when there is no noise

2 code implementations • NeurIPS 2021 • Konstantin Donhauser, Alexandru Ţifrea, Michael Aerni, Reinhard Heckel, Fanny Yang

Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers.

Regression
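A quick numeric sketch of that variance-reduction effect, under purely illustrative parameters: for a fixed Gaussian design, the minimum-l2-norm interpolator w = X⁺y (the solution gradient descent on least squares reaches from zero initialization) has variance that shrinks as the number of features d grows past the number of samples n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, noise_std, reps = 50, 0.5, 200

for d in (60, 500, 2000):
    X = rng.normal(size=(n, d))  # fixed design with d features
    X_pinv = np.linalg.pinv(X)   # min-norm solution map: y -> w
    w_hats = []
    for _ in range(reps):
        # Pure-noise responses isolate the variance of the interpolator.
        noise = noise_std * rng.normal(size=n)
        w_hats.append(X_pinv @ noise)
    total_variance = np.array(w_hats).var(axis=0).sum()
    print(f"d = {d:4d}: variance of the min-norm interpolator ~ {total_variance:.3f}")
```

The printed variance falls as d grows, roughly like sigma^2 * n / (d - n) for this isotropic design.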

Maximizing the robust margin provably overfits on noiseless data

1 code implementation • ICML Workshop AML 2021 • Konstantin Donhauser, Alexandru Ţifrea, Michael Aerni, Reinhard Heckel, Fanny Yang

Numerous recent works show that overparameterization implicitly reduces variance, suggesting vanishing benefits for explicit regularization in high dimensions.

Attribute
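For intuition about the robust margin itself (a standard fact, not the paper's construction): under l-infinity perturbations of size eps, the worst-case margin of a linear classifier is its clean margin minus eps times the l1 norm of its weights, by Hölder's inequality. The sketch below evaluates this in closed form for a min-norm interpolating direction on toy data; all names and parameters are illustrative assumptions.

```python
import numpy as np

def robust_margins(w, X, y, eps):
    """Worst-case margins of the linear classifier sign(X @ w) when each
    input may be perturbed by at most eps in l_inf norm: by Hoelder's
    inequality the adversary can lower every margin by eps * ||w||_1."""
    return y * (X @ w) - eps * np.linalg.norm(w, 1)

rng = np.random.default_rng(3)
n, d, eps = 100, 1000, 0.2
y = rng.choice([-1.0, 1.0], size=n)
X = 0.1 * y[:, None] + rng.normal(size=(n, d))  # weak signal in every coordinate

w = np.linalg.pinv(X) @ y  # min-norm interpolating direction: X @ w = y exactly

# The interpolator has unit margins on the training set, but its robust
# margin shrinks by eps * ||w||_1 and here becomes negative.
margins = robust_margins(w, X, y, eps)
print(f"standard train accuracy: {(y * (X @ w) > 0).mean():.2f}")
print(f"robust train accuracy:   {(margins > 0).mean():.2f}")
```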
