Search Results for author: Hubert Baniecki

Found 16 papers, 14 papers with code

Red-Teaming Segment Anything Model

1 code implementation • 2 Apr 2024 • Krzysztof Jankowski, Bartlomiej Sobieski, Mateusz Kwiatkowski, Jakub Szulc, Michal Janik, Hubert Baniecki, Przemyslaw Biecek

Foundation models have emerged as pivotal tools, tackling many complex tasks through pre-training on vast datasets and subsequent fine-tuning for specific applications.

Image Segmentation • Segmentation • +2

Interpretable Machine Learning for Survival Analysis

1 code implementation • 15 Mar 2024 • Sophie Hanna Langbein, Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek, Marvin N. Wright

With the spread and rapid advancement of black-box machine learning models, the field of interpretable machine learning (IML), or explainable artificial intelligence (XAI), has become increasingly important over the last decade.

Explainable artificial intelligence • Explainable Artificial Intelligence (XAI) • +4

Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI

no code implementations • 12 Mar 2024 • Vladimir Zaigrajew, Hubert Baniecki, Lukasz Tulczyjew, Agata M. Wijata, Jakub Nalepa, Nicolas Longépé, Przemyslaw Biecek

Remote sensing (RS) applications in the space domain demand machine learning (ML) models that are reliable, robust, and quality-assured, making red teaming a vital approach for identifying and exposing potential flaws and biases.

Hyperspectral image analysis • HYPERVIEW Challenge

Be Careful When Evaluating Explanations Regarding Ground Truth

1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek

Evaluating explanations of image classifiers regarding ground truth, e.g., segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.

survex: an R package for explaining machine learning survival models

1 code implementation • 30 Aug 2023 • Mikołaj Spytek, Mateusz Krzyziński, Sophie Hanna Langbein, Hubert Baniecki, Marvin N. Wright, Przemysław Biecek

Due to their flexibility and superior performance, machine learning models frequently complement and outperform traditional statistical survival models.

Decision Making • Explainable artificial intelligence

Explaining and visualizing black-box models through counterfactual paths

1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek

Explainable AI (XAI) is an increasingly important area of machine learning research that aims to make black-box models transparent and interpretable.

counterfactual • Explainable Artificial Intelligence (XAI) • +2

Adversarial attacks and defenses in explainable artificial intelligence: A survey

1 code implementation • 6 Jun 2023 • Hubert Baniecki, Przemyslaw Biecek

Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions.

Decision Making • Explainable artificial intelligence • +2

Towards Evaluating Explanations of Vision Transformers for Medical Imaging

1 code implementation • 12 Apr 2023 • Piotr Komorowski, Hubert Baniecki, Przemysław Biecek

Our findings provide insights into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria for comparing them.

Decision Making • Image Classification

Interpretable machine learning for time-to-event prediction in medicine and healthcare

3 code implementations • 17 Mar 2023 • Hubert Baniecki, Bartlomiej Sobieski, Patryk Szatkowski, Przemyslaw Bombinski, Przemyslaw Biecek

Time-to-event prediction, e.g., cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.

Decision Making • Feature Importance • +4

Performance is not enough: the story told by a Rashomon quartet

1 code implementation • 26 Feb 2023 • Przemyslaw Biecek, Hubert Baniecki, Mateusz Krzyzinski, Dianne Cook

The usual goal of supervised learning is to find the best model, the one that optimizes a particular performance measure.

SurvSHAP(t): Time-dependent explanations of machine learning survival models

1 code implementation • 23 Aug 2022 • Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek

Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and its aggregation is a better determinant of the importance of variables for a prediction than SurvLIME.

Time-to-Event Prediction

Graph-guided random forest for gene set selection

1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger

To demonstrate a concrete application, we focus on bioinformatics, systems biology, and particularly biomedicine, but the presented methodology is applicable in many other domains as well.

Do not explain without context: addressing the blind spot of model explanations

no code implementations • 28 May 2021 • Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, Przemysław Biecek

The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability.

BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)

Fooling Partial Dependence via Data Poisoning

1 code implementation • 26 May 2021 • Hubert Baniecki, Wojciech Kretowicz, Przemyslaw Biecek

We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.

Data Poisoning
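
To illustrate the idea behind this paper, below is a minimal, self-contained toy sketch of data poisoning against a partial-dependence (PD) explanation: a simple genetic algorithm perturbs the background dataset so that the model's PD curve for one feature is pushed toward an adversarial target (here, a flat line). This is an assumption-laden illustration, not the authors' implementation; the model, data, loss, and GA hyperparameters are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] ** 2 + X[:, 1] + 0.1 * rng.normal(size=300)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

grid = np.linspace(-2, 2, 21)

def pd_curve(data, feature=0):
    # Empirical PD: average prediction with the feature fixed at each grid value.
    out = []
    for v in grid:
        d = data.copy()
        d[:, feature] = v
        out.append(model.predict(d).mean())
    return np.array(out)

# Adversarial goal (illustrative): make the PD curve of feature 0 look flat.
target = np.zeros_like(grid)

def loss(data):
    return np.mean((pd_curve(data) - target) ** 2)

# Toy genetic algorithm over perturbed copies of the dataset:
# selection of the fittest candidates, then Gaussian mutation.
pop = [X + 0.05 * rng.normal(size=X.shape) for _ in range(20)]
for _ in range(30):
    parents = sorted(pop, key=loss)[:5]                      # selection
    pop = parents + [p + 0.05 * rng.normal(size=X.shape)     # mutation
                     for p in parents for _ in range(3)]

best = min(pop, key=loss)
print("PD range before:", np.ptp(pd_curve(X)), "after poisoning:", np.ptp(pd_curve(best)))
```

The attack only needs black-box access to model predictions, which mirrors the model-agnostic and explanation-agnostic framing in the abstract; the mutation scale and population size here are arbitrary choices for the toy example.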

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python

1 code implementation • 28 Dec 2020 • Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek

The increasing amount of available data and computing power, together with the constant pursuit of higher performance, results in the growing complexity of predictive models.

BIG-bench Machine Learning • Fairness
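
For context, here is a minimal sketch of how the dalex package is typically used, assuming dalex and scikit-learn are installed; the synthetic data, model choice, and column names are illustrative rather than taken from the paper.

```python
import numpy as np
import pandas as pd
import dalex as dx
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic data; in practice you plug in your own model and frame.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 3)), columns=["f1", "f2", "f3"])
y = (X["f1"] + 0.5 * X["f2"] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap the fitted model in a dalex Explainer.
exp = dx.Explainer(model, X, y, label="rf", verbose=False)

# Global explanation: permutation-based variable importance.
print(exp.model_parts().result)

# Local explanation: break-down attributions for a single observation.
print(exp.predict_parts(X.iloc[[0]]).result)
```

The Explainer object is the package's single entry point; global methods (model_parts, model_profile) and local methods (predict_parts, predict_profile) hang off it, which is the interactive-explainability workflow the title refers to.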

The Grammar of Interactive Explanatory Model Analysis

1 code implementation • 1 May 2020 • Hubert Baniecki, Dariusz Parzych, Przemyslaw Biecek

We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model increases the performance and confidence of human decision making.

BIG-bench Machine Learning • Decision Making • +1
