no code implementations • 13 Jul 2020 • Laura Rieger, Lars Kai Hansen
With machine learning models being used in increasingly sensitive applications, we rely on interpretability methods to verify that no discriminatory attributes were used for classification.
1 code implementation • 9 Jul 2020 • Laura Rieger, Rasmus M. Th. Høegh, Lars K. Hansen
We present a federated learning approach for learning a client-adaptable, robust model when data is non-identically and non-independently distributed (non-IID) across clients.
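As a point of reference for the non-IID setting described above, the following is a minimal sketch of one round of FedAvg-style aggregation, where each client's parameters are weighted by its local dataset size. This is a generic baseline for illustration, not the adaptation method proposed in the paper; the function name and setup are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One aggregation round: average client parameter vectors,
    weighted by each client's local dataset size (hypothetical
    illustration of a FedAvg-style baseline, not the paper's method)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total  # aggregation weights sum to 1
    return coeffs @ stacked                  # size-weighted average

# Two clients holding different amounts of (non-IID) local data:
w_global = federated_average(
    [np.array([1.0, 2.0]), np.array([3.0, 6.0])],
    client_sizes=[1, 3],
)
print(w_global)  # -> [2.5 5. ]
```

Under non-IID data, this plain size-weighted average is exactly the baseline that tends to degrade, which motivates client-adaptable variants.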
1 code implementation • 9 Mar 2020 • Laura Rieger, Lars Kai Hansen
The adoption of machine learning in health care hinges on the transparency of the algorithms used, creating a need for explanation methods.
4 code implementations • ICML 2020 • Laura Rieger, Chandan Singh, W. James Murdoch, Bin Yu
For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective.
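To make the idea of explanations that suggest a corresponding action concrete, here is a hedged toy sketch of one way to act on an explanation during training: add a penalty on the attribution assigned to a feature known to be irrelevant. This uses a linear model with the simple attribution w[j] * x[j]; it is an assumption-laden illustration of the general recipe (loss + explanation penalty), not the contextual-decomposition-based method of the ICML 2020 paper.

```python
import numpy as np

def loss(w, X, y, forbidden_idx, lam=1.0):
    """Cross-entropy loss plus a penalty on the attributions of a
    'forbidden' feature, so training discourages the model from
    relying on it (hypothetical toy example, not the paper's method)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # For a linear model, a simple per-example attribution of
    # feature j is w[j] * x[j]; penalize its squared magnitude.
    attribution = w[forbidden_idx] * X[:, forbidden_idx]
    penalty = np.mean(attribution ** 2)
    return ce + lam * penalty
```

Minimizing this combined objective pushes the forbidden feature's attribution toward zero while still fitting the labels, which is the sense in which the explanation "suggests an action": it identifies what the penalty should target.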
no code implementations • 25 Sep 2019 • Laura Rieger, Lars Kai Hansen
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.