no code implementations • 4 Jul 2024 • Faisal Hamman, Pasan Dissanayake, Saumitra Mishra, Freddy Lecue, Sanghamitra Dutta
This work formalizes the challenge of fine-tuning multiplicity in Tabular LLMs and proposes a novel metric to quantify the robustness of individual predictions without expensive model retraining.
1 code implementation • 4 Jun 2024 • Wenzhuo Tang, Haitao Mao, Danial Dervovic, Ivan Brugere, Saumitra Mishra, Yuying Xie, Jiliang Tang
To achieve effective data scaling, we aim to develop a general model that captures diverse graph data patterns and can be adapted to support a range of downstream tasks.
no code implementations • 3 Jun 2024 • Sanjay Kariyappa, Freddy Lécué, Saumitra Mishra, Christopher Pond, Daniele Magazzeni, Manuela Veloso
This paper proposes Progressive Inference - a framework to compute input attributions to explain the predictions of decoder-only sequence classification models.
no code implementations • 29 May 2024 • Tom Bewley, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, Manuela Veloso
We introduce T-CREx, a novel model-agnostic method for local and global counterfactual explanation (CE), which summarises recourse options for both individuals and groups in the form of human-readable rules.
no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations.
1 code implementation • 26 May 2023 • Dan Ley, Saumitra Mishra, Daniele Magazzeni
Counterfactual explanations have been widely studied in explainability, with a range of application-dependent methods prominent in fairness, recourse and model understanding.
1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly.
no code implementations • 9 Feb 2023 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
This composition can be represented in the form of a tree.
no code implementations • 16 Oct 2022 • Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, Jundong Li
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?".
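A minimal, generic sketch of the question posed above: find a small perturbation of an input that flips a classifier's prediction. This is not the paper's method; the toy dataset, logistic regression model, and greedy coordinate search are illustrative assumptions.

```python
# Generic counterfactual sketch: perturb one instance until the predicted label flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()                                   # instance to explain
target = 1 - clf.predict(x.reshape(1, -1))[0]     # desired (flipped) label

x_cf = x.copy()
for _ in range(200):
    if clf.predict(x_cf.reshape(1, -1))[0] == target:
        break
    grads = clf.coef_[0]                          # linear model: gradient of the logit
    direction = np.sign(grads) if target == 1 else -np.sign(grads)
    j = np.argmax(np.abs(grads))                  # nudge the most influential feature
    x_cf[j] += 0.05 * direction[j]

print("original prediction:      ", clf.predict(x.reshape(1, -1))[0])
print("counterfactual prediction:", clf.predict(x_cf.reshape(1, -1))[0])
print("perturbation:", x_cf - x)
```

Real counterfactual methods add constraints this sketch ignores, such as sparsity, plausibility, and actionability of the perturbation.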
no code implementations • 6 Jul 2022 • Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni
In this work, we propose a novel strategy -- that we call RobX -- to generate robust counterfactuals for tree-based ensembles, e.g., XGBoost.
no code implementations • 14 Apr 2022 • Dan Ley, Saumitra Mishra, Daniele Magazzeni
Counterfactual explanations have been widely studied in explainability, with a range of application-dependent methods emerging in fairness, recourse and model understanding.
no code implementations • 30 Oct 2021 • Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
There exist several methods that aim to address the crucial task of understanding the behaviour of AI/ML models.
no code implementations • 29 Sep 2021 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions, which is called a metamodel (a model of the model).
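A minimal sketch of the metamodel idea described above: approximate a black-box model globally with a simple interpretable surrogate fitted to the black-box's own predictions. This illustrates the general surrogate-modelling recipe only, not the paper's tree-structured composition of interpretable functions; the dataset and model choices are illustrative assumptions.

```python
# Global surrogate ("metamodel") sketch: fit a shallow tree to a black-box's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier().fit(X, y)

# Train the metamodel on the black-box's *predictions*, not the true labels,
# so it approximates the model's behaviour rather than the data.
y_bb = black_box.predict(X)
metamodel = DecisionTreeClassifier(max_depth=3).fit(X, y_bb)

fidelity = accuracy_score(y_bb, metamodel.predict(X))   # agreement with the black box
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(metamodel))                           # human-readable approximation
```

The key quantity is fidelity, i.e. how closely the interpretable surrogate reproduces the black-box's outputs, which trades off against the surrogate's simplicity.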
1 code implementation • 15 May 2020 • Saumitra Mishra, Emmanouil Benetos, Bob L. Sturm, Simon Dixon
One way to analyse the behaviour of machine learning models is through local explanations that highlight input features that maximally influence model predictions.
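A minimal perturbation-based sketch of that idea: score each input feature by how much the model's predicted probability changes when the feature is "removed" (here, replaced with its training mean). This is a generic local-explanation recipe under illustrative assumptions, not the analysis method proposed in the paper.

```python
# Perturbation-based local attribution sketch: ablate each feature and measure the
# drop in predicted probability for one instance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
means = X.mean(axis=0)

attributions = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    x_perturbed = x.copy()
    x_perturbed[j] = means[j]                                  # ablate feature j
    p = model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]
    attributions[j] = baseline - p                             # confidence drop = importance

print("most influential features:", np.argsort(-np.abs(attributions))[:3])
```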
no code implementations • 21 Apr 2019 • Saumitra Mishra, Daniel Stoller, Emmanouil Benetos, Bob L. Sturm, Simon Dixon
However, this requires a careful selection of hyper-parameters to generate interpretable examples for each neuron of interest, and current methods rely on a manual, qualitative evaluation of each setting, which is prohibitively slow.