1 code implementation • 2 Apr 2024 • Krzysztof Jankowski, Bartlomiej Sobieski, Mateusz Kwiatkowski, Jakub Szulc, Michal Janik, Hubert Baniecki, Przemyslaw Biecek
Foundation models have emerged as pivotal tools, tackling many complex tasks through pre-training on vast datasets and subsequent fine-tuning for specific applications.
1 code implementation • 15 Mar 2024 • Sophie Hanna Langbein, Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek, Marvin N. Wright
With the spread and rapid advancement of black box machine learning models, the field of interpretable machine learning (IML) or explainable artificial intelligence (XAI) has become increasingly important over the last decade.
no code implementations • 12 Mar 2024 • Vladimir Zaigrajew, Hubert Baniecki, Lukasz Tulczyjew, Agata M. Wijata, Jakub Nalepa, Nicolas Longépé, Przemyslaw Biecek
Remote sensing (RS) applications in the space domain demand machine learning (ML) models that are reliable, robust, and quality-assured, making red teaming a vital approach for identifying and exposing potential flaws and biases.
1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek
Evaluating explanations of image classifiers against ground truth, e.g. segmentation masks defined by human perception, primarily assesses the quality of the models under consideration rather than the explanation methods themselves.
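The evaluation protocol being critiqued here is commonly implemented as a mask-agreement score: binarize a saliency map and compare it with a human-annotated segmentation mask. A minimal sketch of that protocol (the `mask_iou` helper and the quantile threshold are illustrative assumptions, not the paper's code):

```python
import numpy as np

def mask_iou(saliency, gt_mask, q=0.9):
    # Binarize the saliency map at its q-th quantile, then compare
    # the binary mask with the ground-truth segmentation mask via
    # intersection-over-union (illustrative evaluation protocol).
    pred = saliency >= np.quantile(saliency, q)
    inter = np.logical_and(pred, gt_mask).sum()
    union = np.logical_or(pred, gt_mask).sum()
    return inter / union if union > 0 else 0.0
```

A high score here can reflect a well-localized model as much as a faithful explanation method, which is exactly the confound the abstract points out.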
1 code implementation • 30 Aug 2023 • Mikołaj Spytek, Mateusz Krzyziński, Sophie Hanna Langbein, Hubert Baniecki, Marvin N. Wright, Przemysław Biecek
Due to their flexibility and superior performance, machine learning models frequently complement and outperform traditional statistical survival models.
1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek
Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable.
1 code implementation • 6 Jun 2023 • Hubert Baniecki, Przemyslaw Biecek
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions.
1 code implementation • 12 Apr 2023 • Piotr Komorowski, Hubert Baniecki, Przemysław Biecek
Our findings provide insights into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria for comparing them.
3 code implementations • 17 Mar 2023 • Hubert Baniecki, Bartlomiej Sobieski, Patryk Szatkowski, Przemyslaw Bombinski, Przemyslaw Biecek
Time-to-event prediction, e.g. cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.
1 code implementation • 26 Feb 2023 • Przemyslaw Biecek, Hubert Baniecki, Mateusz Krzyzinski, Dianne Cook
The usual goal of supervised learning is to find the best model, the one that optimizes a particular performance measure.
1 code implementation • 23 Aug 2022 • Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and its aggregation is a better determinant of the importance of variables for a prediction than SurvLIME.
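The aggregation mentioned above can be sketched generically: given time-dependent attributions for one prediction, integrating their absolute values over time yields a single importance score per variable. A minimal NumPy sketch with hypothetical array shapes (this is not the actual SurvSHAP(t) API, just the aggregation idea):

```python
import numpy as np

# Hypothetical time-dependent attributions for one prediction:
# shap_t[i, j] = attribution of variable j at time point times[i].
times = np.array([1.0, 2.0, 4.0, 8.0])
shap_t = np.array([
    [0.10, -0.02],
    [0.25, -0.01],
    [0.40,  0.03],
    [0.15,  0.02],
])

# Aggregate each variable's time-dependent effect into one score by
# integrating |attribution| over time with the trapezoidal rule.
dt = np.diff(times)[:, None]
abs_t = np.abs(shap_t)
importance = np.sum(dt * (abs_t[1:] + abs_t[:-1]) / 2.0, axis=0)
ranking = np.argsort(-importance)  # most important variable first
```

Here variable 0 has a large mid-horizon effect, so it dominates the aggregated ranking even though both variables are small at early time points.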
1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger
To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.
no code implementations • 28 May 2021 • Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, Przemysław Biecek
The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability.
1 code implementation • 26 May 2021 • Hubert Baniecki, Wojciech Kretowicz, Przemyslaw Biecek
We believe this is the first work to use a genetic algorithm for manipulating explanations; the attack is transferable, as it generalizes in both a model-agnostic and an explanation-agnostic manner.
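The model-agnostic character of such an attack comes from the fact that a genetic algorithm only needs to evaluate a fitness function, not gradients. A generic sketch of the idea, where every function, parameter, and the simplistic `explanation` stand-in are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def explanation(data):
    # Stand-in for any explanation computed on a dataset, e.g. a
    # vector of feature importances (illustrative only).
    return data.mean(axis=0)

def fitness(data, target):
    # Higher is better: smaller distance between the explanation of
    # the perturbed data and the attacker's target explanation.
    return -np.linalg.norm(explanation(data) - target)

def evolve(data, target, pop_size=20, n_gen=50, sigma=0.05):
    # Population of slightly perturbed copies of the original data.
    pop = [data + rng.normal(0, sigma, data.shape) for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = np.array([fitness(p, target) for p in pop])
        # Selection: keep the better-scoring half of the population.
        keep = [pop[i] for i in np.argsort(scores)[pop_size // 2:]]
        # Crossover + mutation: mix two random parents elementwise,
        # then add Gaussian noise.
        children = []
        while len(keep) + len(children) < pop_size:
            a, b = rng.choice(len(keep), size=2, replace=False)
            mask = rng.random(data.shape) < 0.5
            child = np.where(mask, keep[a], keep[b])
            children.append(child + rng.normal(0, sigma, data.shape))
        pop = keep + children
    return max(pop, key=lambda p: fitness(p, target))

X = rng.normal(size=(100, 3))
target = np.array([1.0, 0.0, -1.0])  # attacker's desired explanation
X_adv = evolve(X, target)
```

Because selection only ranks candidates by fitness, the same loop works for any black-box model and any explanation method plugged into `explanation` — the "both ways" generalization the abstract describes.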
1 code implementation • 28 Dec 2020 • Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek
The increasing amount of available data, growing computing power, and the constant pursuit of higher performance result in increasingly complex predictive models.
1 code implementation • 1 May 2020 • Hubert Baniecki, Dariusz Parzych, Przemyslaw Biecek
We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model increases the performance and confidence of human decision-making.