1 code implementation • 13 Sep 2023 • Harrison Rosenberg, Shimaa Ahmed, Guruprasad V Ramesh, Ramya Korlakai Vinayak, Kassem Fawaz
In particular, their ability to synthesize and modify human faces has spurred research into using generated face images in both training data augmentation and model performance assessments.
no code implementations • 9 Feb 2022 • Harrison Rosenberg, Robi Bhattacharjee, Kassem Fawaz, Somesh Jha
Given the prevalence of ERM sample complexity bounds, our proposed framework enables machine learning practitioners to easily understand the convergence behavior of multicalibration error for a myriad of classifier architectures.
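As a rough illustration of the quantity being bounded, the sketch below estimates an empirical multicalibration error as the worst calibration gap over (group, prediction-bin) cells; the binning scheme, the group masks, and the worst-case aggregation are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def multicalibration_error(y_true, y_prob, groups, n_bins=10):
    """Worst-case calibration gap over (group, prediction-bin) cells."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]       # interior bin edges
    bin_idx = np.digitize(y_prob, edges)                   # bin index in 0..n_bins-1
    worst_gap = 0.0
    for mask in groups.values():
        for b in range(n_bins):
            cell = mask & (bin_idx == b)
            if not cell.any():
                continue
            # |average outcome - average prediction| within this (group, bin) cell
            gap = abs(y_true[cell].mean() - y_prob[cell].mean())
            worst_gap = max(worst_gap, gap)
    return worst_gap

# Toy usage: a well-calibrated predictor and two hypothetical subgroups
rng = np.random.default_rng(0)
y_prob = rng.uniform(size=2000)
y_true = (rng.uniform(size=2000) < y_prob).astype(int)    # outcomes drawn at the predicted rate
mask_a = rng.uniform(size=2000) < 0.5
print(multicalibration_error(y_true, y_prob, {"group_a": mask_a, "group_b": ~mask_a}))
```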
1 code implementation • 5 Aug 2021 • Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha
We answer this question with an analytical and empirical exploration of recent face obfuscation systems.
no code implementations • 3 Mar 2020 • Yue Gao, Harrison Rosenberg, Kassem Fawaz, Somesh Jha, Justin Hsu
In test-time attacks, an adversary crafts adversarial examples: perturbations imperceptible to humans that, when added to an input example, force a machine learning model to misclassify it.
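A minimal NumPy sketch of the standard fast gradient sign method (FGSM) on a logistic-regression model makes the idea of such a test-time perturbation concrete; the toy weights, input, and eps are illustrative assumptions, not the attack or models studied in the paper.

```python
import numpy as np

def fgsm_perturbation(x, w, b, y, eps=0.1):
    """One FGSM step for a logistic-regression model sigmoid(w @ x + b).

    Moves x in the signed-gradient direction that increases the
    cross-entropy loss for the true label y in {0, 1}.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # model's probability of class 1
    grad_x = (p - y) * w                     # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)         # small, bounded perturbation

# Toy example: a correctly classified point is pushed across the boundary
w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.3, -0.2]), 1               # w @ x + b = 0.7 > 0 -> class 1
x_adv = fgsm_perturbation(x, w, b, y, eps=0.5)
print(w @ x + b, w @ x_adv + b)               # the adversarial score falls below 0
```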
no code implementations • 8 Nov 2018 • Zachary Charles, Harrison Rosenberg, Dimitris Papailiopoulos
We show that these "transferable adversarial directions" are guaranteed to exist for linear separators of a given set, and will exist with high probability for linear classifiers trained on independent sets drawn from the same distribution.
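The toy NumPy sketch below illustrates the phenomenon rather than the paper's construction or proofs: a single direction computed from one linear classifier also flips points for a second classifier trained on an independent sample; the Gaussian-mixture data, the least-squares separator, and the step size are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n, dim=5):
    """Toy two-class Gaussian mixture with well-separated means."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, dim)) + 2.0 * (2 * y[:, None] - 1)
    return X, y

def train_linear(X, y):
    """Least-squares linear separator (a stand-in for any linear learner)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
    return coef[:-1], coef[-1]

# Two classifiers trained on independent samples from the same distribution
w1, b1 = train_linear(*sample(500))
w2, b2 = train_linear(*sample(500))

# One shared adversarial direction, computed from the first classifier only
direction = -w1 / np.linalg.norm(w1)
step = 2.0 / np.linalg.norm(w1)          # step chosen to cross w1's decision boundary

X_test, y_test = sample(500)
pos = y_test == 1
X_adv = X_test[pos] + step * direction   # the same direction added to every positive point
print("flipped under w1:", np.mean(X_adv @ w1 + b1 < 0))
print("flipped under w2:", np.mean(X_adv @ w2 + b2 < 0))   # the direction transfers
```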