1 code implementation • 14 Apr 2021 • Kim de Bie, Ana Lucic, Hinda Haned
In hybrid human-AI systems, users need to decide whether to trust an algorithmic prediction when its true error is unknown.
1 code implementation • 7 Aug 2020 • Phillip Lippe, Pengjie Ren, Hinda Haned, Bart Voorn, Maarten de Rijke
Instead of generating a response from scratch, P2-Net generates system responses by paraphrasing template-based responses.
1 code implementation • 27 Nov 2019 • Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke
Model interpretability has become an important problem in machine learning (ML) due to the growing impact that algorithmic decisions have on humans.
1 code implementation • 17 Jul 2019 • Ana Lucic, Hinda Haned, Maarten de Rijke
Given a large error, MC-BRP determines (1) feature values that would result in a reasonable prediction, and (2) general trends between each feature and the target, both based on Monte Carlo simulations.
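The procedure described above can be sketched as follows. This is a minimal illustrative implementation of the general idea, not the authors' MC-BRP code: for an instance with a large error, each feature is perturbed with Monte Carlo samples drawn from its training range, the values that bring the prediction within a tolerance of the true target are kept as "reasonable" bounds, and the trend is taken as the sign of the feature-target correlation. The model, tolerance, and sampling scheme here are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy regression data: y depends positively on feature 0, negatively on feature 1.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = LinearRegression().fit(X, y)

def mc_brp(model, x, y_true, X_train, y_train, n_samples=1000, tolerance=1.0):
    """For each feature, Monte Carlo sample replacement values and keep
    those whose resulting prediction falls within `tolerance` of the true
    target; also report the sign of the feature-target correlation."""
    results = {}
    for j in range(X_train.shape[1]):
        samples = rng.uniform(X_train[:, j].min(), X_train[:, j].max(), n_samples)
        candidates = np.tile(x, (n_samples, 1))
        candidates[:, j] = samples
        preds = model.predict(candidates)
        reasonable = samples[np.abs(preds - y_true) <= tolerance]
        trend = np.sign(np.corrcoef(X_train[:, j], y_train)[0, 1])
        bounds = (reasonable.min(), reasonable.max()) if reasonable.size else None
        results[j] = {"bounds": bounds, "trend": "+" if trend > 0 else "-"}
    return results

# An outlying instance whose prediction is far from the true target.
x = np.array([3.0, -3.0, 0.0])
print(mc_brp(model, x, y_true=1.0, X_train=X, y_train=y))
```

The output maps each feature to a range of values that would have produced a reasonable prediction, plus the direction of its relationship with the target.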
no code implementations • 5 Jul 2019 • Ilse van der Linden, Hinda Haned, Evangelos Kanoulas
We present Global Aggregations of Local Explanations (GALE) with the objective of providing insight into a model's global decision-making process.
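The underlying idea can be sketched briefly: compute a per-instance attribution vector, then aggregate across instances into one global importance score per feature. The attribution method and aggregation below (contribution = weight × value, aggregated by mean absolute value) are simplifying assumptions for illustration, not necessarily the aggregations proposed in the GALE paper.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([2.0, -0.5, 0.0])   # a toy linear model's coefficients
X = rng.normal(size=(200, 3))

# Local explanation per instance: each feature's contribution to the prediction.
local_explanations = X * weights

# Global aggregation: mean absolute contribution per feature across instances.
global_importance = np.abs(local_explanations).mean(axis=0)

# Rank features from most to least globally important.
ranking = np.argsort(global_importance)[::-1]
print(global_importance, ranking)
```

With these toy coefficients, feature 0 dominates the global ranking and the inert feature 2 falls to the bottom, which is the kind of model-level summary a global aggregation is meant to surface.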
no code implementations • 4 Jul 2019 • Ana Lucic, Hinda Haned, Maarten de Rijke
Understanding how "black-box" models arrive at their predictions has sparked significant interest from both within and outside the AI community.