Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data

9 Oct 2019  ·  Gal Yona, Amirata Ghorbani, James Zou ·

A learning algorithm $A$ trained on a dataset $D$ is revealed to have poor performance on some subpopulation at test time. Where should the responsibility for this lie? It can be argued that the data is responsible: for example, if training $A$ on a more representative dataset $D'$ would have improved the performance. But it can similarly be argued that $A$ itself is at fault: if training a different variant $A'$ on the same dataset $D$ would have improved performance. As ML becomes widespread and such failure cases more common, these types of questions are proving to be far from hypothetical. With this motivation in mind, in this work we provide a rigorous formulation of the joint credit assignment problem between a learning algorithm $A$ and a dataset $D$. We propose Extended Shapley as a principled framework for this problem, and experiment empirically with how it can be used to address questions of ML accountability.
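To make the credit-assignment idea concrete, here is a minimal sketch of a plain two-player Shapley attribution between the algorithm and the data. This is a generic illustration, not the paper's Extended Shapley framework: the performance numbers are invented, and the "players" (swapping in the improved algorithm $A'$ vs. the more representative dataset $D'$) are hypothetical counterfactuals.

```python
from itertools import permutations

# Hypothetical test accuracies on the affected subpopulation.
# These numbers are illustrative assumptions, not taken from the paper.
perf = {
    frozenset(): 0.50,           # original algorithm A on original data D
    frozenset({"A"}): 0.70,      # improved algorithm variant A', original data D
    frozenset({"D"}): 0.65,      # original algorithm A, more representative data D'
    frozenset({"A", "D"}): 0.90, # both improvements together
}

def shapley(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

credit = shapley(["A", "D"], perf)
# credit["A"] + credit["D"] equals the total improvement 0.90 - 0.50
```

With two players the Shapley value reduces to averaging the two marginal contributions, e.g. $\phi_A = \tfrac{1}{2}[v(\{A\}) - v(\emptyset)] + \tfrac{1}{2}[v(\{A,D\}) - v(\{D\})]$; here the algorithm swap receives slightly more credit than the data swap.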
