no code implementations • 9 Jul 2023 • Prakhar Ganesh, Hongyan Chang, Martin Strobel, Reza Shokri
We investigate how different sources of randomness in neural network training affect group fairness.
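Not the paper's experimental setup, but a minimal sketch of the question it studies: retrain the same model under different random seeds (which here control weight initialization and data shuffling) and track how a group fairness metric fluctuates. The data, model, and demographic-parity metric below are illustrative assumptions.

```python
# Sketch: how training randomness (here, just the seed) moves a fairness metric.
# Synthetic data, model, and metric are illustrative, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                      # protected attribute A
X = rng.normal(size=(n, 5)) + group[:, None] * 0.3
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

def demographic_parity_gap(y_pred, a):
    """|P(yhat=1 | A=0) - P(yhat=1 | A=1)|."""
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

gaps = []
for seed in range(10):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                        random_state=seed).fit(X, y)
    gaps.append(demographic_parity_gap(clf.predict(X), group))

print(f"fairness gap across seeds: mean={np.mean(gaps):.3f}, std={np.std(gaps):.3f}")
```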
no code implementations • 22 Jun 2023 • Xudong Shen, Hannah Brown, Jiashu Tao, Martin Strobel, Yao Tong, Akshay Narayan, Harold Soh, Finale Doshi-Velez
Increasing attention is being paid to how AI systems should be regulated.
no code implementations • 11 Feb 2023 • Jeremiah Zhe Liu, Krishnamurthy Dj Dvijotham, Jihyeon Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran
Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but under-perform in under-represented population subgroups, especially when there are imbalanced group distributions in the long-tailed training data.
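A toy illustration of the failure mode described above (the synthetic data and model are assumptions, not the paper's setup): with a long-tailed group distribution, plain ERM can score well on average while badly underperforming on the minority subgroup.

```python
# Sketch: average accuracy can hide poor performance on a rare subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_major, n_minor = 1900, 100                       # long-tailed group sizes
g = np.array([0] * n_major + [1] * n_minor)
X = rng.normal(size=(n_major + n_minor, 2))
# The minority group's labels follow a different rule, which ERM underfits.
y = np.where(g == 0, X[:, 0] > 0, X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)               # plain ERM
pred = clf.predict(X)
print("average accuracy:", (pred == y).mean())
for grp in (0, 1):
    print(f"group {grp} accuracy:", (pred[g == grp] == y[g == grp]).mean())
```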
no code implementations • 14 Sep 2022 • Martin Strobel, Reza Shokri
The privacy risks of machine learning models are a major concern when training them on sensitive and personal data.
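One standard way to quantify this risk, shown here as an illustrative sketch rather than the paper's specific analysis, is a loss-threshold membership inference attack: points on which the model's loss is unusually low are guessed to be training members.

```python
# Sketch of a loss-threshold membership inference attack: per-sample loss
# separates training members from held-out points on an overfit model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_in, X_out, y_in, y_out = X[:1000], X[1000:], y[:1000], y[1000:]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def per_sample_loss(m, Xs, ys):
    p = m.predict_proba(Xs)[np.arange(len(ys)), ys]
    return -np.log(np.clip(p, 1e-12, None))        # cross-entropy per sample

losses = np.concatenate([per_sample_loss(model, X_in, y_in),
                         per_sample_loss(model, X_out, y_out)])
member = np.concatenate([np.ones(1000), np.zeros(1000)])
# AUC > 0.5 means low loss reliably flags training members.
print("membership inference AUC:", roc_auc_score(member, -losses))
```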
no code implementations • 16 Jun 2020 • Neel Patel, Martin Strobel, Yair Zick
We propose a new axiomatization for a generalization of the Banzhaf index; our method can also be thought of as an approximation of a black-box model by a higher-order polynomial.
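For context, the classical Banzhaf value of a player i in a cooperative game (N, v), which the paper's index generalizes, is:

```latex
% Banzhaf value of player i: i's marginal contribution averaged
% over all coalitions S that exclude i.
\beta_i(v) = \frac{1}{2^{\,n-1}} \sum_{S \subseteq N \setminus \{i\}}
    \bigl( v(S \cup \{i\}) - v(S) \bigr)
```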
no code implementations • 29 Jun 2019 • Reza Shokri, Martin Strobel, Yair Zick
We analyze connections between model explanations and the leakage of sensitive information about the model's training set.
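An illustrative sketch of this kind of leakage (the setup is assumed, not the paper's experiments): for a logistic model the saliency explanation, i.e. the input gradient of the predicted probability, is available in closed form, and its norm alone can separate training members from non-members without querying labels.

```python
# Sketch: gradient-based explanations can leak membership. For a logistic
# model, d p / d x = p(1-p) w, so the explanation norm tracks the model's
# confidence, which is higher on well-fit training points.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))
y = (X @ rng.normal(size=50) + rng.normal(scale=2.0, size=400) > 0).astype(int)
X_in, y_in, X_out = X[:200], y[:200], X[200:]

model = LogisticRegression(C=100.0).fit(X_in, y_in)   # lightly overfit
w = model.coef_.ravel()

def explanation_norms(Xs):
    p = model.predict_proba(Xs)[:, 1]
    grads = (p * (1 - p))[:, None] * w                # saliency, closed form
    return np.linalg.norm(grads, axis=1)

scores = np.concatenate([explanation_norms(X_in), explanation_norms(X_out)])
member = np.concatenate([np.ones(200), np.zeros(200)])
# Members sit at confident (extreme) probabilities, giving smaller norms.
print("AUC of explanation-norm attack:", roc_auc_score(member, -scores))
```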
no code implementations • 7 Aug 2017 • Jakub Sliwinski, Martin Strobel, Yair Zick
We study the following problem: given a labeled dataset and a specific datapoint x, how did the i-th feature influence the classification for x?
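A simple illustrative influence score, not the axiomatically characterized measure from the paper: replace feature i with values resampled from the dataset and measure the average shift in the model's prediction for x.

```python
# Sketch of an ablation-style feature-influence score for a single point x:
# resample feature i from the data and average the change in P(y=1 | x).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 2 * X[:, 2] > 0).astype(int)        # only features 0 and 2 matter
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def influence(model, X, x, i, n_samples=100):
    """Mean absolute change in P(y=1|x) when feature i is resampled."""
    base = model.predict_proba(x[None, :])[0, 1]
    xs = np.tile(x, (n_samples, 1))
    xs[:, i] = rng.choice(X[:, i], size=n_samples)
    return np.mean(np.abs(model.predict_proba(xs)[:, 1] - base))

x = X[0]
print([round(influence(model, X, x, i), 3) for i in range(X.shape[1])])
```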