Search Results for author: Przemysław Biecek

Found 17 papers, 10 papers with code

SurvSHAP(t): Time-dependent explanations of machine learning survival models

1 code implementation • 23 Aug 2022 • Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek

Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and its aggregation is a better determinant of the importance of variables for a prediction than SurvLIME.

Time-to-Event Prediction
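The core idea of SurvSHAP(t) is that an explanation of a survival model is itself a function of time, which can then be aggregated into a scalar importance. The sketch below illustrates that idea only: it uses a toy exponential survival model and a simple one-feature-at-a-time perturbation in place of proper Shapley values, so it is not the actual SurvSHAP(t) algorithm.

```python
import numpy as np

def survival_fn(x, times):
    # Toy exponential survival model whose hazard depends on the
    # features (a hypothetical stand-in for a fitted ML model).
    hazard = 0.1 + 0.3 * x[0] + 0.05 * x[1]
    return np.exp(-hazard * times)

def time_dependent_attributions(x, baseline, times):
    # One-feature-at-a-time attribution: the change in the predicted
    # survival curve when a single feature is reset to its baseline.
    # (A simplification of Shapley values, for illustration only.)
    full = survival_fn(x, times)
    attrs = []
    for j in range(len(x)):
        x_ref = x.copy()
        x_ref[j] = baseline[j]
        attrs.append(full - survival_fn(x_ref, times))
    return np.array(attrs)  # shape: (n_features, n_times)

times = np.linspace(0, 10, 101)
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attrs = time_dependent_attributions(x, baseline, times)

# Aggregate each feature's time-dependent curve into a scalar
# importance: the mean absolute attribution over the time grid.
importance = np.abs(attrs).mean(axis=1)
```

In this toy setup the first feature has the larger effect on the hazard, so its aggregated importance comes out larger — mirroring the paper's point that aggregating time-dependent explanations yields a ranking of variables.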

Explainable expected goal models for performance analysis in football analytics

1 code implementation • 14 Jun 2022 • Mustafa Cavus, Przemysław Biecek

To estimate the probability of a shot resulting in a goal, i.e., its expected goal (xG) value, several features derived from football event and tracking data are used to train an expected goal model.

Explainable artificial intelligence
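An expected goal model of this kind is typically a probabilistic classifier over shot features. The sketch below fits a logistic-regression xG model on synthetic shot data with two hypothetical features (distance and angle to goal); the features, coefficients, and data are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic shot data (hypothetical features, for illustration):
# distance to goal (m) and shooting angle (rad); closer shots and
# wider angles are made more likely to be goals.
n = 2000
distance = rng.uniform(5, 30, n)
angle = rng.uniform(0.1, 1.2, n)
logit = 2.0 - 0.15 * distance + 1.5 * angle
goal = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit a logistic-regression xG model by gradient descent.
X = np.column_stack([np.ones(n), distance, angle])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - goal) / n

def xg(dist, ang):
    """Expected-goal probability for a single shot."""
    return 1 / (1 + np.exp(-(w[0] + w[1] * dist + w[2] * ang)))
```

A close, wide-angle shot then receives a higher xG than a distant, narrow-angle one, which is the behaviour an explainability method would be asked to verify on a real model.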

LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

1 code implementation • 15 Nov 2021 • Weronika Hryniewska, Adrianna Grudzień, Przemysław Biecek

LIMEcraft enhances the explanation process by allowing a user to interactively select semantically consistent areas and thoroughly examine the prediction for an image instance, even when it contains many image features.

Fairness
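The underlying mechanism — perturbing user-selected image regions and fitting a linear surrogate to the model's responses — can be sketched in a few lines. This is a minimal LIME-style illustration with a toy image, toy regions, and a toy black-box scorer, not the LIMEcraft implementation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy "image" and a toy black-box score that depends only on the
# mean intensity of the top-left quadrant (a hypothetical model).
image = rng.random((8, 8))
def black_box(img):
    return img[:4, :4].mean()

# User-selected regions, in the spirit of LIMEcraft's interactive
# selection (here simply the four quadrants, for illustration).
regions = [(slice(0, 4), slice(0, 4)), (slice(0, 4), slice(4, 8)),
           (slice(4, 8), slice(0, 4)), (slice(4, 8), slice(4, 8))]

def perturb(img, keep):
    out = img.copy()
    for r, on in zip(regions, keep):
        if not on:
            out[r] = 0.0  # mask out the deselected region
    return out

# Score every on/off combination of regions and fit a linear
# surrogate: score ≈ w0 + sum_j w_j * keep_j.
keeps = np.array(list(product([0, 1], repeat=len(regions))), float)
scores = np.array([black_box(perturb(image, k)) for k in keeps])
A = np.column_stack([np.ones(len(keeps)), keeps])
w = np.linalg.lstsq(A, scores, rcond=None)[0]
region_importance = w[1:]
```

Since the toy model only looks at the first quadrant, the surrogate assigns essentially all importance to that region — the kind of semantically grounded attribution the tool is designed to let users inspect.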

Do not explain without context: addressing the blind spot of model explanations

no code implementations • 28 May 2021 • Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, Przemysław Biecek

The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability.

BIG-bench Machine Learning

Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models

no code implementations • 14 Apr 2021 • Przemysław Biecek, Marcin Chlebus, Janusz Gajda, Alicja Gosiewska, Anna Kozak, Dominik Ogonowski, Jakub Sztachelski, Piotr Wojewnik

Even more importantly, we also show how to boost advanced models using techniques that make them interpretable and more accessible to credit risk practitioners, resolving a crucial obstacle to the widespread deployment of more complex 'black box' models such as random forests, gradient boosted trees, or extreme gradient boosted trees.

Explainable artificial intelligence
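One widely used model-agnostic XAI technique in this setting is permutation feature importance: shuffle one feature at a time and measure how much the model's performance degrades. The sketch below applies it to a toy black-box credit scorer on synthetic data; the data, features, and scorer are hypothetical, and this is just one of many techniques in the XAI toolbox the paper surveys.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic credit data: one informative feature and one noise
# feature; the "black box" stands in for e.g. a boosted-tree model.
n = 500
income = rng.normal(0, 1, n)
noise = rng.normal(0, 1, n)
X = np.column_stack([income, noise])
y = (income + 0.1 * rng.normal(0, 1, n) > 0).astype(int)

def black_box(X):
    return (X[:, 0] > 0).astype(int)  # hypothetical fitted scorer

def accuracy(model, X, y):
    return (model(X) == y).mean()

def permutation_importance(model, X, y, n_repeats=10):
    # Model-agnostic importance: how much does accuracy drop when a
    # single column is shuffled, breaking its link to the outcome?
    base = accuracy(model, X, y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops[j] += base - accuracy(model, Xp, y)
    return drops / n_repeats

imp = permutation_importance(black_box, X, y)
```

Shuffling the informative feature destroys the scorer's accuracy, while shuffling the noise feature changes nothing — the kind of diagnostic that helps practitioners trust an otherwise opaque credit model.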

fairmodels: A Flexible Tool For Bias Detection, Visualization, And Mitigation

1 code implementation • 1 Apr 2021 • Jakub Wiśniewski, Przemysław Biecek

The package includes a series of methods for bias mitigation that aim to diminish the discrimination in the model.

Bias Detection Fairness
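A classic pre-processing mitigation of the kind such packages provide is reweighing (in the style of Kamiran & Calders): assign each instance a weight so that, under the weighted distribution, the protected attribute becomes independent of the label. The sketch below is a minimal Python illustration of that technique, not code from the fairmodels R package.

```python
import numpy as np

def reweighing(group, label):
    # Weight each (group, label) cell by expected/observed frequency
    # so that group membership becomes statistically independent of
    # the label under the weighted distribution.
    group = np.asarray(group)
    label = np.asarray(label)
    w = np.empty(len(label), float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            observed = mask.mean()
            w[mask] = expected / observed
    return w

# Biased toy data: group 1 receives positive labels far more often.
group = np.array([0] * 50 + [1] * 50)
label = np.array([1] * 10 + [0] * 40 + [1] * 40 + [0] * 10)
w = reweighing(group, label)
```

After reweighing, the weighted positive rate is identical in both groups, so a model trained with these instance weights sees a debiased label distribution.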

Interpretable Meta-Measure for Model Performance

3 code implementations • 2 Jun 2020 • Alicja Gosiewska, Katarzyna Woźnica, Przemysław Biecek

For example, the difference in performance for two models has no probabilistic interpretation, there is no reference point to indicate whether they represent a significant improvement, and it makes no sense to compare such differences between data sets.

Meta-Learning
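One way to give performance differences a probabilistic interpretation, in the spirit of this paper's Elo-inspired meta-measure, is to convert per-round pairwise wins between models into logistic ratings, so that the sigmoid of a rating difference estimates the probability that one model beats another. The sketch below uses invented scores and a plain gradient-ascent fit; it is an illustration of the general idea, not the paper's exact EPP procedure.

```python
import numpy as np

# Hypothetical per-round scores of three models on the same data
# splits (higher is better; values invented for illustration).
scores = np.array([
    [0.81, 0.83, 0.80, 0.82],  # model A
    [0.78, 0.79, 0.81, 0.77],  # model B
    [0.70, 0.72, 0.69, 0.71],  # model C
])
n_models, _ = scores.shape

# Count pairwise wins across rounds.
wins = np.zeros((n_models, n_models))
for i in range(n_models):
    for j in range(n_models):
        if i != j:
            wins[i, j] = (scores[i] > scores[j]).sum()

# Fit Elo-like ratings by gradient ascent on the logistic
# likelihood, so sigmoid(r_i - r_j) ≈ P(model i beats model j).
r = np.zeros(n_models)
for _ in range(5000):
    grad = np.zeros(n_models)
    for i in range(n_models):
        for j in range(n_models):
            if i != j:
                total = wins[i, j] + wins[j, i]
                p = 1 / (1 + np.exp(-(r[i] - r[j])))
                grad[i] += wins[i, j] - total * p
    r += 0.01 * grad
    r -= r.mean()  # ratings are identified only up to a constant
```

Unlike a raw difference in accuracy, a difference in such ratings has a direct reading: it maps through the sigmoid to a head-to-head win probability, which is comparable across data sets.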

Kleister: A novel task for Information Extraction involving Long Documents with Complex Layout

no code implementations • 4 Mar 2020 • Filip Graliński, Tomasz Stanisławek, Anna Wróblewska, Dawid Lipiński, Agnieszka Kaliska, Paulina Rosalska, Bartosz Topolski, Przemysław Biecek

State-of-the-art solutions for Natural Language Processing (NLP) are able to capture a broad range of contexts, like the sentence-level context or document-level context for short documents.

Named Entity Recognition
