Search Results for author: Daniele Magazzeni

Found 34 papers, 4 papers with code

REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values

no code implementations • 13 Mar 2024 • Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso

In this paper, we introduce the problem of feature \emph{reselection}, so that features can be selected efficiently with respect to secondary model performance characteristics even after a feature selection process has already been carried out with respect to a primary objective.

Tasks: Fairness, Feature Selection
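
A minimal sketch of the reselection setting described above, assuming a greedy swap search: pool features are swapped into an already-selected set whenever the swap keeps primary accuracy within a tolerance and improves a secondary metric. In REFRESH itself, SHAP values guide which swaps to evaluate; this sketch brute-forces them, and `secondary_metric` (e.g., a fairness proxy) and all names are illustrative.

```python
# Hypothetical sketch of feature *reselection* (not the REFRESH algorithm).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def reselect(X, y, selected, pool, secondary_metric, tol=0.01):
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    def fit_eval(cols):
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        preds = clf.fit(Xtr[:, cols], ytr).predict(Xte[:, cols])
        return accuracy_score(yte, preds), secondary_metric(preds, yte)

    base_acc, best_sec = fit_eval(selected)
    for i, _ in enumerate(selected):
        for new in pool:
            if new in selected:
                continue
            trial = selected[:i] + [new] + selected[i + 1:]
            acc, sec = fit_eval(trial)
            # accept a swap only if primary accuracy stays within `tol`
            # and the secondary characteristic improves
            if acc >= base_acc - tol and sec > best_sec:
                selected, best_sec = trial, sec
    return selected
```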

Fair Coresets via Optimal Transport

no code implementations • 9 Nov 2023 • Zikai Xiong, Niccolò Dalmasso, Shubham Sharma, Freddy Lecue, Daniele Magazzeni, Vamsi K. Potluru, Tucker Balch, Manuela Veloso

In this work, we present fair Wasserstein coresets (FWC), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.

Tasks: Clustering, Decision Making, +1 more
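
The coreset-plus-weights interface FWC targets can be sketched as follows; this toy uses k-means rather than the paper's optimal-transport formulation, purely to convey the shape of the output: representative points plus sample-level weights rebalanced across protected groups. All names are illustrative.

```python
# Illustrative only: weighted representatives with group-rebalanced weights.
import numpy as np
from sklearn.cluster import KMeans

def toy_weighted_representatives(X, group, k=10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    reps = km.cluster_centers_
    weights = np.bincount(km.labels_, minlength=k).astype(float)
    # majority protected-group label per cluster, then equalise the total
    # weight carried by each group in downstream learning
    rep_group = np.array([np.bincount(group[km.labels_ == c]).argmax()
                          for c in range(k)])
    for g in np.unique(rep_group):
        mask = rep_group == g
        weights[mask] *= 1.0 / weights[mask].sum()
    return reps, weights / weights.sum()
```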

Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates

no code implementations • 17 Jul 2023 • Kyle Mana, Fernando Acero, Stephen Mak, Parisa Zehtabi, Michael Cashmore, Daniele Magazzeni, Manuela Veloso

Discrete optimization problems are in general $\mathcal{NP}$-hard, spanning fields such as mixed-integer programming and combinatorial optimization.

Tasks: Combinatorial Optimization, Management, +2 more
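
A generic cutting-plane loop makes the surrogate's role concrete: the `choose_cut` policy below is a stub standing in for the paper's RL-trained surrogate, and the relaxation solver and separation oracle are assumed callables.

```python
# Generic cutting-plane loop with a pluggable cut-selection policy.
def cutting_planes(solve_relaxation, find_violated_cuts, choose_cut, max_iters=100):
    cuts = []
    for _ in range(max_iters):
        x = solve_relaxation(cuts)            # LP relaxation with current cuts
        violated = find_violated_cuts(x)      # separation oracle
        if not violated:
            return x                          # no cut violated: done
        cuts.append(choose_cut(x, violated))  # the surrogate scores candidate cuts
    return x
```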

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni

Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions, and counterfactual explanations.

Tasks: counterfactual, Counterfactual Explanation, +3 more
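
For a linear model the connection is easy to see concretely: additive attributions and the minimal boundary-crossing counterfactual both lie along the weight vector. The weights, bias, input, and zero background below are made-up numbers, not from the paper.

```python
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical logistic-regression weights
b = -0.25
x = np.array([0.8, 0.3, 1.1])
baseline = np.zeros(3)            # stand-in for the background mean

# Additive attribution for a linear model (exact Shapley values under
# feature independence): w_i * (x_i - E[x_i]).
attributions = w * (x - baseline)

# Minimal L2 counterfactual: project x onto the boundary w.x + b = 0.
logit = w @ x + b
x_cf = x - (logit / (w @ w)) * w
print(attributions, x_cf, w @ x_cf + b)  # last value ~0: on the boundary
```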

SHAP@k: Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features

no code implementations • 10 Jul 2023 • Sanjay Kariyappa, Leonidas Tsepenekas, Freddy Lécué, Daniele Magazzeni

While any method to compute SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP, doing so is highly sample-inefficient.

Tasks: Feature Importance, Multi-Armed Bandits
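
A sketch of the top-k identification (TkIP) flavour, assuming a hypothetical noisy single-sample oracle `sample_shap(j)`: keep sampling until confidence intervals separate the top-k features from the rest. Note the paper's methods allocate samples adaptively (bandit-style); this sketch samples uniformly and is closer to the inefficient baseline it improves on.

```python
import numpy as np

def identify_top_k(sample_shap, d, k, delta=0.05, batch=25, max_rounds=200):
    sums, sqs, n = np.zeros(d), np.zeros(d), 0
    for _ in range(max_rounds):
        for j in range(d):
            draws = np.array([sample_shap(j) for _ in range(batch)])
            sums[j] += draws.sum()
            sqs[j] += (draws ** 2).sum()
        n += batch
        mean = sums / n
        std = np.sqrt(np.maximum(sqs / n - mean ** 2, 1e-12))
        width = std * np.sqrt(2 * np.log(2 * d * n / delta) / n)  # Hoeffding-style
        order = np.argsort(-mean)
        top, rest = order[:k], order[k:]
        if (mean[top] - width[top]).min() > (mean[rest] + width[rest]).max():
            break  # top-k set is PAC-separated from the rest
    return order[:k]
```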

GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations

1 code implementation • 26 May 2023 • Dan Ley, Saumitra Mishra, Daniele Magazzeni

Counterfactual explanations have been widely studied in explainability, with a range of application dependent methods prominent in fairness, recourse and model understanding.

Tasks: counterfactual, Fairness, +1 more
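
The translation idea can be sketched in a few lines: a single direction `delta` is shared by all rejected inputs and scaled per input until the prediction flips. Learning a good `delta` is the paper's contribution; here it is assumed given, and a favourable class label of 1 is an assumption.

```python
import numpy as np

def global_translation_ce(model, X_rejected, delta,
                          scales=np.linspace(0.1, 5.0, 50)):
    results = {}
    for i, x in enumerate(X_rejected):
        for s in scales:
            if model.predict((x + s * delta).reshape(1, -1))[0] == 1:
                results[i] = s     # smallest tried scale that flips this input
                break
    return results                 # coverage = len(results) / len(X_rejected)
```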

Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees

1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta

There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly.

Tasks: counterfactual, valid
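
A Monte-Carlo proxy for such robustness, assuming bootstrap retraining stands in for "the model is updated or changed slightly" and that class 1 is the desired outcome; the paper derives formal probabilistic guarantees rather than this empirical estimate.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def validity_rate(x_cf, X, y, n_models=20):
    valid = 0
    for seed in range(n_models):
        Xb, yb = resample(X, y, random_state=seed)   # bootstrap "model change"
        m = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed).fit(Xb, yb)
        valid += int(m.predict(x_cf.reshape(1, -1))[0] == 1)
    return valid / n_models    # fraction of retrained models where CF stays valid
```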

Bayesian Hierarchical Models for Counterfactual Estimation

no code implementations • 21 Jan 2023 • Natraj Raman, Daniele Magazzeni, Sameena Shah

Counterfactual explanations utilize feature perturbations to analyze the outcome of an original decision and recommend an actionable recourse.

Tasks: counterfactual, Fairness, +1 more

Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions

no code implementations • 21 Nov 2022 • Joshua Lockhart, Daniele Magazzeni, Manuela Veloso

The Concept Bottleneck Models (CBMs) of Koh et al. [2020] provide a means to ensure that a neural-network-based classifier bases its predictions solely on human-understandable concepts.
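
A sketch of abstention at the concept layer, under assumed shapes: concepts whose confidence falls below a threshold are masked with a neutral fill, and the label head is told which concepts were abstained on. The threshold and the fill value are assumptions, not the paper's design.

```python
import numpy as np

def predict_with_abstention(concept_probs, label_head, tau=0.8):
    # abstain on any concept whose confidence is below tau
    confident = np.maximum(concept_probs, 1.0 - concept_probs) >= tau
    concepts = (concept_probs >= 0.5).astype(float)
    concepts[~confident] = 0.5  # neutral fill for abstained concepts
    # label head also receives the abstention mask explicitly
    return label_head(np.concatenate([concepts, confident.astype(float)]))
```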

Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning

no code implementations • 11 Nov 2022 • Danial Dervovic, Nicolas Marchesotti, Freddy Lecue, Daniele Magazzeni

We introduce a family of interpretable machine learning models, with two broad additions: Linearised Additive Models (LAMs) which replace the ubiquitous logistic link function in General Additive Models (GAMs); and SubscaleHedge, an expert advice algorithm for combining base models trained on subsets of features called subscales.

Tasks: Additive models, Binary Classification, +1 more
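
The LAM part of the proposal amounts to reading an additive model's score directly as a probability (identity link clipped to [0, 1]) instead of passing it through a logistic link; a minimal sketch with made-up shape functions:

```python
import numpy as np

def lam_predict_proba(shape_functions, x):
    # sum the per-feature shape functions, then clip: no logistic link
    raw = sum(f_j(x_j) for f_j, x_j in zip(shape_functions, x))
    return float(np.clip(raw, 0.0, 1.0))

# two hypothetical shape functions, as if fitted elsewhere:
shapes = [lambda v: 0.3 + 0.1 * v, lambda v: 0.05 * v ** 2]
print(lam_predict_proba(shapes, [1.0, 2.0]))   # 0.6
```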

Towards learning to explain with concept bottleneck models: mitigating information leakage

no code implementations • 7 Nov 2022 • Joshua Lockhart, Nicolas Marchesotti, Daniele Magazzeni, Manuela Veloso

Concept bottleneck models perform classification by first predicting which of a list of human-provided concepts are true of a datapoint.

Feature Importance for Time Series Data: Improving KernelSHAP

no code implementations • 5 Oct 2022 • Mattia Villani, Joshua Lockhart, Daniele Magazzeni

Feature importance techniques have enjoyed widespread attention in the explainable AI literature as a means of determining how trained machine learning models make their predictions.

Tasks: Event Detection, Feature Importance, +2 more
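
A baseline application of KernelSHAP to a time-series model, flattening each series so that every time step is a feature; the paper's improvements to this baseline for temporal data are not reproduced here, and the synthetic data is invented.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))     # 200 series, 24 time steps each
y = X[:, -3:].sum(axis=1)          # target depends on the last 3 steps
model = RandomForestRegressor(random_state=0).fit(X, y)

# KernelSHAP treats each time step as one feature
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
phi = explainer.shap_values(X[:1], nsamples=500)  # importance per time step
```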

Robust Counterfactual Explanations for Tree-Based Ensembles

no code implementations • 6 Jul 2022 • Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni

In this work, we propose a novel strategy -- that we call RobX -- to generate robust counterfactuals for tree-based ensembles, e.g., XGBoost.

Tasks: counterfactual

Global Counterfactual Explanations: Investigations, Implementations and Improvements

no code implementations • 14 Apr 2022 • Dan Ley, Saumitra Mishra, Daniele Magazzeni

Counterfactual explanations have been widely studied in explainability, with a range of application dependent methods emerging in fairness, recourse and model understanding.

Tasks: counterfactual, Counterfactual Explanation, +1 more

Asynchronous Collaborative Learning Across Data Silos

no code implementations • 23 Mar 2022 • Tiffany Tuor, Joshua Lockhart, Daniele Magazzeni

Our proposed approach enhances conventional federated learning techniques to make them suitable for asynchronous training in this intra-organisational, cross-silo setting.

Tasks: BIG-bench Machine Learning, Federated Learning
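
One common asynchronous-aggregation pattern (not necessarily the paper's exact rule) folds each client update in as it arrives, discounted by staleness, rather than waiting for a synchronous round:

```python
import numpy as np

def apply_async_update(global_w, client_w, client_version, server_version,
                       base_lr=0.5):
    staleness = server_version - client_version
    alpha = base_lr / (1 + staleness)       # older updates count for less
    return (1 - alpha) * global_w + alpha * client_w
```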

Explaining Preference-driven Schedules: the EXPRES Framework

no code implementations • 16 Mar 2022 • Alberto Pozanco, Francesca Mosca, Parisa Zehtabi, Daniele Magazzeni, Sarit Kraus

The EXPRES framework consists of: (i) an explanation generator that, based on a Mixed-Integer Linear Programming model, finds the best set of reasons that can explain an unsatisfied preference; and (ii) an explanation parser, which translates the generated explanations into human interpretable ones.

Tasks: Scheduling
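
A toy MILP in the spirit of "the best set of reasons": choose the cheapest subset of candidate reasons that covers every conflict blocking the preference. The reasons, costs, and coverage data are invented, and EXPRES's actual model is considerably richer than this set cover.

```python
import pulp

reasons = ["shift_full", "skill_missing", "rest_rule"]
cost = {"shift_full": 1, "skill_missing": 2, "rest_rule": 1}
covers = {"conflict_a": ["shift_full", "rest_rule"],
          "conflict_b": ["skill_missing", "rest_rule"]}

prob = pulp.LpProblem("explain", pulp.LpMinimize)
use = {r: pulp.LpVariable(f"use_{r}", cat="Binary") for r in reasons}
prob += pulp.lpSum(cost[r] * use[r] for r in reasons)   # cheapest explanation
for c, rs in covers.items():
    prob += pulp.lpSum(use[r] for r in rs) >= 1         # explain every conflict
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([r for r in reasons if use[r].value() == 1])      # -> ['rest_rule']
```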

Optimal Admission Control for Multiclass Queues with Time-Varying Arrival Rates via State Abstraction

no code implementations • 14 Mar 2022 • Marc Rigter, Danial Dervovic, Parisa Hassanzadeh, Jason Long, Parisa Zehtabi, Daniele Magazzeni

To improve the scalability of our approach to a greater number of task classes, we present an approximation based on state abstraction.

Counterfactual Shapley Additive Explanations

2 code implementations • 27 Oct 2021 • Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni

Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score for each input feature to a model.

Tasks: counterfactual, Counterfactual Explanation, +2 more

How Robust are Limit Order Book Representations under Data Perturbation?

1 code implementation • 10 Oct 2021 • Yufei Wu, Mahmoud Mahfouz, Daniele Magazzeni, Manuela Veloso

The success of machine learning models in the financial domain is highly reliant on the quality of the data representation.

Towards Robust Representation of Limit Orders Books for Deep Learning Models

no code implementations • 10 Oct 2021 • Yufei Wu, Mahmoud Mahfouz, Daniele Magazzeni, Manuela Veloso

The success of deep learning-based limit order book forecasting models is highly dependent on the quality and the robustness of the input data representation.

Tasks: BIG-bench Machine Learning

Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification

no code implementations • EMNLP (FEVER) 2021 • Neema Kotonya, Thomas Spooner, Daniele Magazzeni, Francesca Toni

This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset.

Tasks: Graph Attention, Multi-Task Learning

Counterfactual Explanations for Arbitrary Regression Models

no code implementations • 29 Jun 2021 • Thomas Spooner, Danial Dervovic, Jason Long, Jon Shepard, Jiahao Chen, Daniele Magazzeni

We present a new method for counterfactual explanations (CFEs) based on Bayesian optimisation that applies to both classification and regression models.

Tasks: Bayesian Optimisation, counterfactual, +1 more
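
A sketch of the model-agnostic flavour of this approach, using scikit-optimize's `gp_minimize` as a stand-in optimiser: minimise distance to the query point plus a penalty while the prediction has not flipped. The penalty weight, bounds, and target class are assumptions, not the paper's formulation.

```python
import numpy as np
from skopt import gp_minimize

def find_cfe(model, x, bounds, target=1, n_calls=60):
    def objective(z):
        z = np.asarray(z)
        # large penalty until the model's prediction reaches the target class
        penalty = 0.0 if model.predict(z.reshape(1, -1))[0] == target else 10.0
        return float(np.linalg.norm(z - x)) + penalty

    res = gp_minimize(objective, bounds, n_calls=n_calls, random_state=0)
    return np.asarray(res.x)   # candidate counterfactual
```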

Contrastive Explanations of Plans Through Model Restrictions

no code implementations • 29 Mar 2021 • Benjamin Krarup, Senka Krivic, Daniele Magazzeni, Derek Long, Michael Cashmore, David E. Smith

We formally define model-based compilations in PDDL2.1 of each constraint derived from a user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity.

Towards Efficient Anytime Computation and Execution of Decoupled Robustness Envelopes for Temporal Plans

no code implementations • 17 Nov 2019 • Michael Cashmore, Alessandro Cimatti, Daniele Magazzeni, Andrea Micheli, Parisa Zehtabi

One of the major limitations on the employment of model-based planning and scheduling in practical applications is the need for costly re-planning when an incongruence between the observed reality and the formal model is encountered during execution.

Tasks: Scheduling

Towards Explainable AI Planning as a Service

no code implementations • 14 Aug 2019 • Michael Cashmore, Anna Collins, Benjamin Krarup, Senka Krivic, Daniele Magazzeni, David Smith

Explainable AI is an important area of research within which Explainable Planning is an emerging topic.

Towards Providing Explanations for AI Planner Decisions

no code implementations • 15 Oct 2018 • Rita Borgo, Michael Cashmore, Daniele Magazzeni

In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why.

Tasks: Explainable Artificial Intelligence (XAI)

Explainable Security

no code implementations • 11 Jul 2018 • Luca Viganò, Daniele Magazzeni

The Defense Advanced Research Projects Agency (DARPA) recently launched the Explainable Artificial Intelligence (XAI) program that aims to create a suite of new AI techniques that enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.

Tasks: Explainable Artificial Intelligence (XAI)

Explainable Planning

no code implementations • 29 Sep 2017 • Maria Fox, Derek Long, Daniele Magazzeni

As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent.

CASP Solutions for Planning in Hybrid Domains

no code implementations • 12 Apr 2017 • Marcello Balduccini, Daniele Magazzeni, Marco Maratea, Emily LeBlanc

CASP is an extension of ASP that allows numerical constraints to be added to the rules.

PDDL+ Planning via Constraint Answer Set Programming

no code implementations • 31 Aug 2016 • Marcello Balduccini, Daniele Magazzeni, Marco Maratea

PDDL+ is an extension of PDDL that enables modelling planning domains with mixed discrete-continuous dynamics.

Plan-based Policies for Efficient Multiple Battery Load Management

no code implementations • 23 Jan 2014 • Maria Fox, Derek Long, Daniele Magazzeni

Application of the approach leads to the construction of policies that, in simulation, significantly outperform those that are currently in use and the best published solutions to the battery management problem.

Tasks: Management
