Search Results for author: Sérgio Jesus

Found 7 papers, 3 papers with code

Cost-Sensitive Learning to Defer to Multiple Experts with Workload Constraints

no code implementations • 11 Mar 2024 • Jean V. Alves, Diogo Leitão, Sérgio Jesus, Marco O. P. Sampaio, Javier Liébana, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro

Learning to defer (L2D) aims to improve human-AI collaboration systems by learning how to defer decisions to humans when they are more likely to be correct than an ML classifier.
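The deferral idea described above can be sketched with a toy decision rule: route a case to the human expert when the estimated probability that the expert is correct exceeds the model's confidence. This is an illustrative simplification, not the paper's cost-sensitive, workload-constrained method; all values and function names are hypothetical.

```python
# Toy learning-to-defer (L2D) rule: defer to the expert when the
# estimated expert accuracy exceeds the classifier's confidence.
# Illustrative sketch only; the paper additionally models costs and
# per-expert workload constraints, which are omitted here.

def defer_decision(model_confidence: float, expert_accuracy_estimate: float) -> str:
    """Return 'expert' if the case should be deferred, else 'model'."""
    return "expert" if expert_accuracy_estimate > model_confidence else "model"

# Hypothetical cases: (model confidence, estimated expert accuracy)
cases = [(0.95, 0.80), (0.55, 0.85)]
assignments = [defer_decision(m, h) for m, h in cases]
# A confident model keeps the first case; the uncertain one is deferred.
```

In the full L2D setting, the expert-accuracy estimate is itself learned per expert and per instance, and assignments must jointly respect each expert's capacity.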

Fraud Detection

FiFAR: A Fraud Detection Dataset for Learning to Defer

1 code implementation • 20 Dec 2023 • Jean V. Alves, Diogo Leitão, Sérgio Jesus, Marco O. P. Sampaio, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro

Financial fraud detection is a high-stakes setting where algorithms and human experts often work in tandem; however, there are no publicly available datasets for L2D concerning this important application of human-AI teaming.

Benchmarking • Decision Making +1

A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies

no code implementations • 15 Feb 2023 • Ada Martin, Valerie Chen, Sérgio Jesus, Pedro Saleiro

We hope that this work motivates further study of when and how SimEvals should be used to aid in the design of real-world evaluations.

Decision Making • Fraud Detection

Turning the Tables: Biased, Imbalanced, Dynamic Tabular Datasets for ML Evaluation

2 code implementations • 24 Nov 2022 • Sérgio Jesus, José Pombal, Duarte Alves, André Cruz, Pedro Saleiro, Rita P. Ribeiro, João Gama, Pedro Bizarro

The suite was generated by applying state-of-the-art tabular data generation techniques on an anonymized, real-world bank account opening fraud detection dataset.

Fairness • Fraud Detection +1

FairGBM: Gradient Boosting with Fairness Constraints

1 code implementation • 16 Sep 2022 • André F Cruz, Catarina Belém, Sérgio Jesus, João Bravo, Pedro Saleiro, Pedro Bizarro

Tabular data is prevalent in many high-stakes domains, such as financial services or public policy.

Decision Making • Fairness

On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods

no code implementations • 24 Jun 2022 • Kasun Amarasinghe, Kit T. Rodolfa, Sérgio Jesus, Valerie Chen, Vladimir Balayan, Pedro Saleiro, Pedro Bizarro, Ameet Talwalkar, Rayid Ghani

Most existing evaluations of explainable machine learning (ML) methods rely on simplifying assumptions or proxies that do not reflect real-world use cases; the handful of more robust evaluations in real-world settings have design shortcomings that limit the conclusions that can be drawn about the methods' real-world utility.

Experimental Design • Fraud Detection

How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations

no code implementations • 21 Jan 2021 • Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, Pedro Bizarro, João Gama

We conducted an experiment following XAI Test to evaluate three popular post-hoc explanation methods -- LIME, SHAP, and TreeInterpreter -- on a real-world fraud detection task, with real data, a deployed ML model, and fraud analysts.

Decision Making • Explainable Artificial Intelligence (XAI) +1
