Search Results for author: Przemysław Biecek

Found 33 papers, 20 papers with code

CNN-based explanation ensembling for dataset, representation and explanations evaluation

no code implementations • 16 Apr 2024 • Weronika Hryniewska-Guzik, Luca Longo, Przemysław Biecek

Explainable Artificial Intelligence has gained significant attention due to the widespread use of complex deep learning models in high-stakes domains such as medicine, finance, and autonomous driving.

Explainable artificial intelligence

Interpretable Machine Learning for Survival Analysis

1 code implementation • 15 Mar 2024 • Sophie Hanna Langbein, Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek, Marvin N. Wright

With the spread and rapid advancement of black box machine learning models, the field of interpretable machine learning (IML) or explainable artificial intelligence (XAI) has become increasingly important over the last decade.

Explainable artificial intelligence · Explainable Artificial Intelligence (XAI) +4

Underestimation of lung regions on chest X-ray segmentation masks assessed by comparison with total lung volume evaluated on computed tomography

no code implementations • 18 Feb 2024 • Przemysław Bombiński, Patryk Szatkowski, Bartłomiej Sobieski, Tymoteusz Kwieciński, Szymon Płotka, Mariusz Adamek, Marcin Banasiuk, Mariusz I. Furmanek, Przemysław Biecek

We show that lung X-ray masks created by following the contours of the heart, mediastinum, and diaphragm significantly underestimate lung regions and exclude substantial portions of the lungs from further assessment, which may result in numerous clinical errors.

Computed Tomography (CT)
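The underestimation described above can be quantified as the fraction of a reference lung region that a contour-based mask leaves out. A minimal sketch of that comparison (my own illustration, not the paper's code; masks and values are made up):

```python
import numpy as np

# Hypothetical illustration: quantify how much of a reference lung region
# a contour-based X-ray mask excludes from further assessment.
def excluded_lung_fraction(reference_mask: np.ndarray, contour_mask: np.ndarray) -> float:
    """Fraction of reference lung pixels absent from the contour-based mask."""
    reference = reference_mask.astype(bool)
    contour = contour_mask.astype(bool)
    missed = reference & ~contour          # lung pixels the contour mask excludes
    return missed.sum() / reference.sum()

# Toy example: a 4x4 "lung" where the contour-based mask clips the bottom row,
# as a contour following e.g. the diaphragm might.
reference = np.ones((4, 4), dtype=bool)
contour = reference.copy()
contour[3, :] = False
print(excluded_lung_fraction(reference, contour))  # 0.25
```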

Deep spatial context: when attention-based models meet spatial regression

1 code implementation • 18 Jan 2024 • Paulina Tomaszewska, Elżbieta Sienkiewicz, Mai P. Hoang, Przemysław Biecek

The DSCon allows for a quantitative measure of the spatial context's role using three Spatial Context Measures: $SCM_{features}$, $SCM_{targets}$, $SCM_{residuals}$ to distinguish whether the spatial context is observable within the features of neighboring regions, their target values (attention scores) or residuals, respectively.

regression
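The spatial-regression idea behind these measures can be sketched as follows. This is my simplification, not the paper's exact $SCM$ definitions: summarize each patch's grid neighbors, regress the patch's target value (e.g. its attention score) on that summary, and read the $R^2$ as a spatial-context score.

```python
import numpy as np

# Toy sketch (assumed simplification of DSCon): how much of a patch's target
# value is explained by the features of its 4-connected grid neighbors?
rng = np.random.default_rng(0)
grid, d = 8, 4
features = rng.normal(size=(grid, grid, d))          # per-patch feature vectors

def neighbor_summary(i, j):
    """Mean feature vector over the 4-connected neighbors of patch (i, j)."""
    neigh = [features[a, b] for a, b in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
             if 0 <= a < grid and 0 <= b < grid]
    return np.mean(neigh, axis=0)

X = np.array([neighbor_summary(i, j) for i in range(grid) for j in range(grid)])
# Synthetic targets that genuinely depend on the neighborhood (plus noise).
y = X.mean(axis=1) + 0.1 * rng.normal(size=len(X))

Xb = np.column_stack([np.ones(len(X)), X])           # add intercept
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
r2 = 1 - ((y - Xb @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 2))                                  # high R^2: strong spatial context
```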

survex: an R package for explaining machine learning survival models

1 code implementation • 30 Aug 2023 • Mikołaj Spytek, Mateusz Krzyziński, Sophie Hanna Langbein, Hubert Baniecki, Marvin N. Wright, Przemysław Biecek

Due to their flexibility and superior performance, machine learning models frequently complement and outperform traditional statistical survival models.

Decision Making · Explainable artificial intelligence

Exploration of the Rashomon Set Assists Trustworthy Explanations for Medical Data

1 code implementation • 22 Aug 2023 • Katarzyna Kobylińska, Mateusz Krzyziński, Rafał Machowicz, Mariusz Adamek, Przemysław Biecek

If differently behaving models are detected in the Rashomon set, their combined analysis leads to more trustworthy conclusions, which is of vital importance for high-stakes domains such as medicine.

Explainable artificial intelligence · Explainable Artificial Intelligence (XAI)

The Effect of Balancing Methods on Model Behavior in Imbalanced Classification Problems

1 code implementation • 30 Jun 2023 • Adrian Stando, Mustafa Cavus, Przemysław Biecek

To capture these changes, Explainable Artificial Intelligence tools are used to compare models trained on datasets before and after balancing.

Explainable artificial intelligence · imbalanced classification
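The before/after comparison described above can be illustrated with a minimal sketch (my own illustration, not the paper's code): balance a dataset by random oversampling, fit the same simple model on both versions, and inspect how its parameters shift.

```python
import numpy as np

# Minimal sketch of how balancing changes model behavior (assumed toy setup).
rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression; returns [intercept, *weights]."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# Imbalanced toy data: 95% negatives, 5% positives.
X = np.vstack([rng.normal(0, 1, (190, 1)), rng.normal(2, 1, (10, 1))])
y = np.array([0] * 190 + [1] * 10)

# Random oversampling: resample the minority class up to the majority size.
idx_min = np.flatnonzero(y == 1)
extra = rng.choice(idx_min, size=180, replace=True)
X_bal, y_bal = np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

w_before, w_after = fit_logreg(X, y), fit_logreg(X_bal, y_bal)
# Balancing raises the intercept (the implied base rate) even when the slope
# is similar: exactly the kind of behavioral change XAI tools can surface.
print(w_before[0], w_after[0])
```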

SeFNet: Bridging Tabular Datasets with Semantic Feature Nets

1 code implementation • 20 Jun 2023 • Katarzyna Woźnica, Piotr Wilczyński, Przemysław Biecek

In this paper, we present an example of SeFNet prepared for a collection of predictive tasks in healthcare, with the features' relations derived from the SNOMED-CT ontology.

Meta-Learning · Semantic Similarity +1

Prevention is better than cure: a case study of the abnormalities detection in the chest

no code implementations • 18 May 2023 • Weronika Hryniewska, Piotr Czarnecki, Jakub Wiśniewski, Przemysław Bombiński, Przemysław Biecek

Based on this use case, we show how to monitor data and model balance (fairness) throughout the life cycle of a predictive model, from data acquisition to parity analysis of model scores.

Fairness
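One building block of the monitoring described above is a parity check on model scores across groups. A basic sketch (assumed illustration, not the paper's pipeline; scores and groups are made up):

```python
import numpy as np

# Hypothetical parity check: compare mean model scores between two groups.
def score_parity_ratio(scores: np.ndarray, group: np.ndarray) -> float:
    """Ratio of mean scores between the two groups (closer to 1 = more parity)."""
    m0 = scores[group == 0].mean()
    m1 = scores[group == 1].mean()
    return min(m0, m1) / max(m0, m1)

scores = np.array([0.8, 0.7, 0.9, 0.4, 0.5, 0.6])
group = np.array([0, 0, 0, 1, 1, 1])
ratio = score_parity_ratio(scores, group)
print(round(ratio, 3))  # 0.625; a common rule of thumb flags ratios below 0.8
```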

Towards Evaluating Explanations of Vision Transformers for Medical Imaging

1 code implementation • 12 Apr 2023 • Piotr Komorowski, Hubert Baniecki, Przemysław Biecek

Our findings provide insights into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria for comparing them.

Decision Making · Image Classification

SurvSHAP(t): Time-dependent explanations of machine learning survival models

1 code implementation • 23 Aug 2022 • Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek

Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and its aggregation is a better determinant of the importance of variables for a prediction than SurvLIME.

Time-to-Event Prediction
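The aggregation idea can be sketched as follows (my simplification of SurvSHAP(t), with invented attribution curves): each variable gets an attribution curve over time, and averaging the absolute attribution across the time grid ranks variables, including those whose effect varies with time.

```python
import numpy as np

# Toy time-dependent attribution curves (assumed, for illustration only).
times = np.linspace(0, 10, 101)
attrib = {
    "age":       0.3 * np.ones_like(times),     # constant effect over time
    "treatment": 0.6 * np.exp(-times / 2),      # strong early, fades later
    "stage":     0.1 * np.ones_like(times),
}

# Aggregate: average |attribution| over the time grid, then rank variables.
importance = {name: np.abs(curve).mean() for name, curve in attrib.items()}
ranking = sorted(importance, key=importance.get, reverse=True)
print(ranking)  # ['age', 'treatment', 'stage']
```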

Explainable expected goal models for performance analysis in football analytics

1 code implementation • 14 Jun 2022 • Mustafa Cavus, Przemysław Biecek

To estimate the probability of a shot resulting in a goal, an expected goal model is trained on several features derived from football event and tracking data.

Explainable artificial intelligence
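A minimal xG model along these lines can be sketched as a logistic model mapping shot features to a goal probability. The features, coefficients, and data below are made up for illustration; this is not the paper's model.

```python
import numpy as np

# Hypothetical expected goal (xG) sketch on synthetic shot data.
rng = np.random.default_rng(1)
n = 400
distance = rng.uniform(5, 30, n)                 # metres from goal
angle = rng.uniform(0.2, 1.2, n)                 # shooting angle (radians)
logit = 1.5 - 0.15 * distance + 1.0 * angle      # assumed true relationship
goal = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit a logistic model by plain gradient descent.
X = np.column_stack([np.ones(n), distance, angle])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - goal) / n

def xg(dist, ang):
    """Expected goal value for a single shot."""
    return 1 / (1 + np.exp(-(w[0] + w[1] * dist + w[2] * ang)))

# Close, central shots should receive a higher xG than distant, narrow ones.
print(xg(8, 1.0), xg(25, 0.4))
```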

LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

1 code implementation • 15 Nov 2021 • Weronika Hryniewska, Adrianna Grudzień, Przemysław Biecek

LIMEcraft enhances the explanation process by allowing a user to interactively select semantically consistent areas and thoroughly examine the prediction for an image instance, even when the image contains many features.

Fairness

Do not explain without context: addressing the blind spot of model explanations

no code implementations • 28 May 2021 • Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, Przemysław Biecek

The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability.

BIG-bench Machine Learning · Explainable Artificial Intelligence (XAI)

Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models

no code implementations • 14 Apr 2021 • Przemysław Biecek, Marcin Chlebus, Janusz Gajda, Alicja Gosiewska, Anna Kozak, Dominik Ogonowski, Jakub Sztachelski, Piotr Wojewnik

Even more importantly, we show how to boost advanced models with techniques that make them interpretable and more accessible to credit risk practitioners, resolving a crucial obstacle to the widespread deployment of more complex 'black box' models such as random forests, gradient boosted, or extreme gradient boosted trees.

Explainable artificial intelligence · Explainable Artificial Intelligence (XAI) +1

fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation

1 code implementation • 1 Apr 2021 • Jakub Wiśniewski, Przemysław Biecek

The package includes a series of methods for bias mitigation that aim to diminish the discrimination in the model.

Bias Detection · Fairness
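One classic mitigation idea of this kind is reweighing; fairmodels itself is an R package and its methods may differ, so the sketch below is only an assumed illustration: give each (group, label) cell the weight that would make group and label statistically independent.

```python
import numpy as np

# Illustrative reweighing sketch (a classic bias-mitigation technique).
def reweighing_weights(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Weight each sample by expected/observed frequency of its (group, label) cell."""
    w = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            w[cell] = expected / cell.mean()
    return w

group = np.array([0, 0, 0, 0, 1, 1])
label = np.array([1, 1, 1, 0, 1, 0])
w = reweighing_weights(group, label)
# After reweighing, the weighted positive rate is equal across groups.
for g in (0, 1):
    m = group == g
    print(g, (w[m] * label[m]).sum() / w[m].sum())
```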

Interpretable Meta-Measure for Model Performance

3 code implementations • 2 Jun 2020 • Alicja Gosiewska, Katarzyna Woźnica, Przemysław Biecek

For example, the difference in performance for two models has no probabilistic interpretation, there is no reference point to indicate whether they represent a significant improvement, and it makes no sense to compare such differences between data sets.

Meta-Learning
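The Elo-style alternative to raw score differences can be sketched as follows (an assumed simplification of the paper's meta-measure, on synthetic scores): compare models by the probability that one beats the other across resampled evaluation splits, which does have a probabilistic interpretation.

```python
import numpy as np

# Synthetic per-split AUC scores for two models (made-up numbers).
rng = np.random.default_rng(0)
splits = 200
scores = {
    "model_a": rng.normal(0.85, 0.02, splits),
    "model_b": rng.normal(0.83, 0.02, splits),
}

# Win probability across splits, then a log-odds gap on an Elo-like scale.
wins = np.mean(scores["model_a"] > scores["model_b"])
rating_gap = np.log(wins / (1 - wins))
print(round(wins, 2), round(rating_gap, 2))
```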

Kleister: A novel task for Information Extraction involving Long Documents with Complex Layout

no code implementations • 4 Mar 2020 • Filip Graliński, Tomasz Stanisławek, Anna Wróblewska, Dawid Lipiński, Agnieszka Kaliska, Paulina Rosalska, Bartosz Topolski, Przemysław Biecek

State-of-the-art solutions for Natural Language Processing (NLP) are able to capture a broad range of contexts, like the sentence-level context or document-level context for short documents.

named-entity-recognition · Named Entity Recognition +2
