Search Results for author: Hubert Baniecki

Found 12 papers, 11 papers with code

Be Careful When Evaluating Explanations Regarding Ground Truth

1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek

Evaluating explanations of image classifiers regarding ground truth, e.g., segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.

survex: an R package for explaining machine learning survival models

1 code implementation • 30 Aug 2023 • Mikołaj Spytek, Mateusz Krzyziński, Sophie Hanna Langbein, Hubert Baniecki, Marvin N. Wright, Przemysław Biecek

Due to their flexibility and superior performance, machine learning models frequently complement and outperform traditional statistical survival models.

Decision Making • Explainable artificial intelligence

Explaining and visualizing black-box models through counterfactual paths

1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek

Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable.

counterfactual • Explainable Artificial Intelligence (XAI) • +2

Adversarial attacks and defenses in explainable artificial intelligence: A survey

1 code implementation • 6 Jun 2023 • Hubert Baniecki, Przemyslaw Biecek

Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions.

Decision Making • Explainable artificial intelligence • +2

Towards Evaluating Explanations of Vision Transformers for Medical Imaging

1 code implementation • 12 Apr 2023 • Piotr Komorowski, Hubert Baniecki, Przemysław Biecek

Our findings provide insights into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria for comparing them.

Decision Making • Image Classification

Performance is not enough: the story told by a Rashomon quartet

1 code implementation • 26 Feb 2023 • Przemyslaw Biecek, Hubert Baniecki, Mateusz Krzyzinski, Dianne Cook

But what if the second-best model describes the data in a completely different way?

SurvSHAP(t): Time-dependent explanations of machine learning survival models

1 code implementation • 23 Aug 2022 • Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek

Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and its aggregation is a better determinant of the importance of variables for a prediction than SurvLIME.

Time-to-Event Prediction
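
The idea behind SurvSHAP(t) admits a compact illustration: instead of a single Shapley value per feature, each feature receives a whole curve over time, attributing the model's predicted survival function S(t | x). Below is a minimal, self-contained Python sketch that computes exact Shapley values of a toy survival model at every point of a time grid; the toy model, the background-averaging convention, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(0)
background = rng.normal(size=(50, 3))   # background sample, 3 features
times = np.linspace(0.1, 5.0, 20)       # evaluation time grid

def survival_fn(X, times):
    # toy survival model: hazard grows with a weighted sum of features
    risk = np.exp(0.8 * X[:, 0] + 0.3 * X[:, 1])  # feature 2 is ignored
    return np.exp(-np.outer(risk, times))         # shape (n, len(times))

def value(subset, x):
    # expected S(t | x) when features outside `subset` come from the background
    Xb = background.copy()
    Xb[:, list(subset)] = x[list(subset)]
    return survival_fn(Xb, times).mean(axis=0)

def survshap_t(x):
    # exact Shapley values of S(t | x), one curve over `times` per feature
    p = len(x)
    phi = np.zeros((p, len(times)))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            for S in combinations(others, size):
                w = factorial(size) * factorial(p - size - 1) / factorial(p)
                phi[j] += w * (value(S + (j,), x) - value(S, x))
    return phi

phi = survshap_t(np.array([1.0, -0.5, 2.0]))
print(phi.shape)  # (3, 20): time-dependent attributions per feature
```

Aggregating each curve over time (e.g., its mean absolute value) then yields the single importance score per variable that the abstract compares against SurvLIME.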

Graph-guided random forest for gene set selection

1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger

To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.

Do not explain without context: addressing the blind spot of model explanations

no code implementations • 28 May 2021 • Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, Przemysław Biecek

The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability.

BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)

Fooling Partial Dependence via Data Poisoning

1 code implementation • 26 May 2021 • Hubert Baniecki, Wojciech Kretowicz, Przemyslaw Biecek

We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.

Data Poisoning
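
A minimal sketch of the attack's flavor, assuming a simple evolutionary hill-climbing loop in place of the authors' full genetic algorithm: perturb the background data over which partial dependence is averaged, keep whichever perturbation moves the PD curve of the explained feature furthest from the original, and never retrain or modify the model itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=300)
model = GradientBoostingRegressor().fit(X, y)

def partial_dependence(model, X, feature, grid):
    # average model prediction with `feature` clamped to each grid value
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        curve.append(model.predict(Xv).mean())
    return np.array(curve)

grid = np.linspace(-2, 2, 21)
original_pd = partial_dependence(model, X, 0, grid)

# evolutionary search over poisoned copies of the background data: keep
# whichever perturbation shifts the PD curve of feature 0 furthest away
best_X, best_dist = X.copy(), 0.0
for _ in range(200):
    candidate = best_X + rng.normal(scale=0.05, size=X.shape)
    candidate[:, 0] = X[:, 0]  # keep the explained feature itself intact
    shifted_pd = partial_dependence(model, candidate, 0, grid)
    dist = np.abs(shifted_pd - original_pd).mean()
    if dist > best_dist:
        best_X, best_dist = candidate, dist

print(f"mean absolute shift of the PD curve: {best_dist:.3f}")
```

The key point the sketch preserves is that the explanation changes even though the model's parameters never do: partial dependence is an average over data, so poisoning the data alone suffices.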

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python

1 code implementation • 28 Dec 2020 • Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek

The increasing amount of available data, computing power, and the constant pursuit of higher performance result in the growing complexity of predictive models.

BIG-bench Machine Learning • Fairness
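
For orientation, a short usage sketch of the package's central abstraction, the Explainer, which wraps a fitted model together with its data and exposes explanation methods; the dataset and model below are arbitrary placeholders, and any scikit-learn-compatible estimator can be wrapped the same way.

```python
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# placeholder data and model; dalex only needs predict / predict_proba
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# the Explainer bundles model, data, and labels; explanations are methods
explainer = dx.Explainer(model, X, y, label="random forest")

print(explainer.model_performance().result)   # global performance summary
explainer.model_parts().plot()                # permutation feature importance
explainer.predict_parts(X.iloc[[0]]).plot()   # local break-down for one case
```

The same Explainer also backs the fairness diagnostics (its model_fairness method) that the Fairness tag above refers to.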

The Grammar of Interactive Explanatory Model Analysis

1 code implementation • 1 May 2020 • Hubert Baniecki, Dariusz Parzych, Przemyslaw Biecek

We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model increases the performance and confidence of human decision making.

BIG-bench Machine Learning • Decision Making • +1
