Search Results for author: Salvatore Orlando

Found 7 papers, 3 papers with code

ILMART: Interpretable Ranking with Constrained LambdaMART

1 code implementation · 1 Jun 2022 · Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Alberto Veneri

Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming to develop intelligible and accurate predictive models.

Learning-To-Rank

EiFFFeL: Enforcing Fairness in Forests by Flipping Leaves

no code implementations · 29 Dec 2021 · Seyum Assefa Abebe, Claudio Lucchese, Salvatore Orlando

Nowadays, Machine Learning (ML) techniques are extensively adopted in many socially sensitive systems, which requires carefully studying the fairness of the decisions taken by such systems.

Fairness

Beyond Robustness: Resilience Verification of Tree-Based Classifiers

no code implementations · 5 Dec 2021 · Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando

In this paper we criticize the robustness measure traditionally employed to assess the performance of machine learning models deployed in adversarial settings.

Learning Early Exit Strategies for Additive Ranking Ensembles

1 code implementation · 6 May 2021 · Francesco Busolin, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

Modern search engine ranking pipelines are commonly based on large machine-learned ensembles of regression trees.

Query-level Early Exit for Additive Learning-to-Rank Ensembles

no code implementations · 30 Apr 2020 · Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani

In this paper, we investigate the novel problem of query-level early exiting: deciding whether it is profitable to stop the traversal of the ranking ensemble for all the candidate documents to be scored for a query, and to return a ranking based on the additive scores computed by only a limited portion of the ensemble.

Learning-To-Rank
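
To make the query-level early-exit idea above concrete, here is a minimal Python sketch of scoring with an additive tree ensemble that may stop traversal early for an entire query. The function name, the sentinel prefix size, and the top-k margin test are illustrative assumptions, not the exit strategy actually proposed in the paper.

```python
import numpy as np

def score_with_query_level_early_exit(trees, docs, sentinel=100, margin=1.0, k=10):
    """Sketch: additive scoring with a single query-level early-exit check."""
    # Partial additive scores after the first `sentinel` trees.
    scores = np.zeros(len(docs))
    for tree in trees[:sentinel]:
        scores += np.array([tree(d) for d in docs])

    # Illustrative exit test: if the k-th best partial score already beats
    # the (k+1)-th by `margin`, stop the traversal for ALL candidates of
    # this query and rank by the partial scores.
    ranked = np.sort(scores)[::-1]
    if len(ranked) > k and ranked[k - 1] - ranked[k] >= margin:
        return scores

    # Otherwise, score with the full ensemble as usual.
    for tree in trees[sentinel:]:
        scores += np.array([tree(d) for d in docs])
    return scores

# Toy usage: 1000 "trees" as weighted functions of a scalar document feature.
trees = [lambda d, w=w: w * d for w in np.random.rand(1000)]
docs = list(np.random.rand(50))
print(score_with_query_level_early_exit(trees, docs)[:5])
```

The point of deciding per query rather than per document is that one test amortizes over all candidates, so the exit check itself adds negligible overhead to the ranking pipeline.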

Treant: Training Evasion-Aware Decision Trees

1 code implementation2 Jul 2019 Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando

Despite its success and popularity, machine learning is now recognized as vulnerable to evasion attacks, i.e., carefully crafted perturbations of test inputs designed to force prediction errors.

BIG-bench Machine Learning
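
As a generic illustration of the evasion threat model addressed above (not the Treant training algorithm itself), the following sketch shows how a small, crafted perturbation can push a test input across a decision-tree threshold and flip its prediction. The dataset and the perturbation size are made up for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy 1-D dataset: a depth-1 tree learns a split near x = 0.5.
X = np.array([[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
stump = DecisionTreeClassifier(max_depth=1).fit(X, y)

x = np.array([[0.55]])        # correctly classified as class 1
x_adv = x - 0.10              # crafted perturbation crossing the split threshold
print(stump.predict(x))       # -> [1]
print(stump.predict(x_adv))   # -> [0]  (prediction forced to flip)
```

Standard greedy tree induction ignores such perturbations at training time, which is the gap that evasion-aware training methods like the one in this paper aim to close.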
