1 code implementation • 22 Jan 2024 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
While shallow decision trees may be interpretable, larger ensembles such as gradient-boosted trees, which often set the state of the art on tabular data, remain black-box models.
Explainable Artificial Intelligence (XAI)
1 code implementation • 13 Jun 2023 • Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer, Eyke Hüllermeier
Post-hoc explanation techniques such as the well-established partial dependence plot (PDP), which investigates feature dependencies, are used in explainable artificial intelligence (XAI) to understand black-box machine learning models.
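The PDP mentioned above can be sketched in a few lines: clamp one feature to each value on a grid, average the model's predictions over the remaining features, and plot the resulting curve. The toy model and data below are hypothetical stand-ins for any fitted black-box model, not the paper's setup.

```python
import numpy as np

# Hypothetical black-box model: f(x) = 2*x0 + x0*x1
# (a stand-in for any fitted model exposing batch predictions).
def model(X):
    return 2 * X[:, 0] + X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))

def partial_dependence(model, X, feature, grid):
    """Average prediction with `feature` clamped to each grid value,
    marginalizing over the empirical distribution of the other features."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v                  # intervene on one feature
        pd_values.append(model(X_mod).mean())  # average over the rest
    return np.array(pd_values)

grid = np.linspace(-1, 1, 5)
pd_curve = partial_dependence(model, X, feature=0, grid=grid)
```

For this model the curve is roughly linear in the clamped feature, since the interaction term averages out when the second feature is centered near zero.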
Explainable Artificial Intelligence (XAI)
no code implementations • 2 Mar 2023 • Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier
Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario.
Explainable Artificial Intelligence (XAI)
no code implementations • 1 Feb 2023 • Patrick Kolpaczki, Viktor Bengs, Maximilian Muschalik, Eyke Hüllermeier
The Shapley value, arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game, has recently been used extensively in explainable artificial intelligence.
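For small player sets the Shapley value can be computed exactly from its definition: each player's value is a weighted average of its marginal contributions v(S ∪ {i}) − v(S) over all coalitions S not containing i. A minimal sketch, using a made-up three-player game rather than anything from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values:
    phi_i = sum over S not containing i of
            |S|! * (n - |S| - 1)! / n! * (v(S ∪ {i}) - v(S))."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[p] += weight * (value(frozenset(S) | {p}) - value(frozenset(S)))
    return phi

# Hypothetical game: the grand coalition earns 100, any pair earns 60,
# and singletons earn 0 except player 'a', who earns 10 alone.
def v(S):
    if len(S) == 3:
        return 100.0
    if len(S) == 2:
        return 60.0
    if S == frozenset({'a'}):
        return 10.0
    return 0.0

phi = shapley_values(['a', 'b', 'c'], v)
```

The resulting values satisfy efficiency (they sum to v of the grand coalition, 100) and symmetry ('b' and 'c' are interchangeable and receive equal shares), while 'a' is rewarded for its extra standalone value. This exact enumeration is exponential in the number of players, which is why XAI applications rely on approximation schemes.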
no code implementations • 5 Sep 2022 • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer
So far, Explainable Artificial Intelligence (XAI) has mainly focused on static learning scenarios.
Explainable Artificial Intelligence (XAI)