Search Results for author: Enrico Bertini

Found 11 papers, 7 papers with code

An Exploration And Validation of Visual Factors in Understanding Classification Rule Sets

no code implementations • 19 Sep 2021 • Jun Yuan, Oded Nov, Enrico Bertini

Rule sets are typically presented as a text-based list of logical statements (rules).
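For readers unfamiliar with the format, here is a minimal Python sketch of a rule set as an ordered list of logical statements evaluated against a single record; the feature names, thresholds, and the classify helper are illustrative assumptions, not taken from the paper:

    # Each rule pairs a set of conditions with a predicted label; the first
    # rule whose conditions all hold determines the prediction.
    rules = [
        ({"age": lambda v: v > 60, "bmi": lambda v: v >= 30}, "high risk"),
        ({"age": lambda v: v > 60}, "medium risk"),
        ({}, "low risk"),  # default rule: no conditions, always matches
    ]

    def classify(record, rules):
        for conditions, label in rules:
            if all(test(record[feature]) for feature, test in conditions.items()):
                return label

    print(classify({"age": 72, "bmi": 27}, rules))  # -> "medium risk"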

AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation

1 code implementation • 12 Sep 2021 • Oscar Gomez, Steffen Holter, Jun Yuan, Enrico Bertini

Rapid improvements in the performance of machine learning models have pushed them to the forefront of data-driven decision-making.

Decision Making

Visualizing Rule Sets: Exploration and Validation of a Design Space

no code implementations • 1 Mar 2021 • Jun Yuan, Oded Nov, Enrico Bertini

Rule sets are typically presented as a text-based list of logical statements (rules).

Towards Ground Truth Explainability on Tabular Data

1 code implementation • 20 Jul 2020 • Brian Barr, Ke Xu, Claudio Silva, Enrico Bertini, Robert Reilly, C. Bayan Bruss, Jason D. Wittenbach

In data science, there is a long history of using synthetic data for method development, feature selection and feature engineering.
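As a hedged illustration of that practice, the sketch below generates a synthetic tabular dataset with a known number of informative features using scikit-learn's generic make_classification; this is a stand-in for illustration only, not the ground-truth generator the paper proposes:

    from sklearn.datasets import make_classification
    import pandas as pd

    # 1000 rows, 10 columns, of which only 4 actually drive the label and
    # 2 are redundant linear combinations of the informative ones.
    X, y = make_classification(
        n_samples=1000,
        n_features=10,
        n_informative=4,
        n_redundant=2,
        random_state=0,
    )
    df = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])
    df["label"] = y
    print(df.head())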

Feature Engineering • Feature Selection

PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines

1 code implementation • arXiv 2020 • Jorge Piazentin Ono, Sonia Castelo, Roque Lopez, Enrico Bertini, Juliana Freire, Claudio Silva

In recent years, a wide variety of automated machine learning (AutoML) methods have been proposed to search and generate end-to-end learning pipelines.
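As a rough sketch of what such an end-to-end pipeline looks like, the example below chains imputation, scaling, and an estimator with scikit-learn's Pipeline; this is a hand-written illustration, not a pipeline produced by an AutoML system or by PipelineProfiler itself:

    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # data preparation
        ("scale", StandardScaler()),                    # feature transformation
        ("model", LogisticRegression(max_iter=1000)),   # final estimator
    ])
    # pipeline.fit(X_train, y_train); pipeline.score(X_test, y_test)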

Human-Computer Interaction

Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

1 code implementation • 23 Apr 2020 • Sungsoo Ray Hong, Jessica Hullman, Enrico Bertini

As the use of machine learning (ML) models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works.

Decision Making

ViCE: Visual Counterfactual Explanations for Machine Learning Models

1 code implementation • 5 Mar 2020 • Oscar Gomez, Steffen Holter, Jun Yuan, Enrico Bertini

The continued improvements in the predictive accuracy of machine learning models have allowed for their widespread practical application.
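To give a sense of the underlying idea, the sketch below finds a counterfactual by greedily nudging one feature at a time until a fitted model's prediction flips; it is a generic, simplified illustration and not the ViCE algorithm itself:

    import numpy as np

    def simple_counterfactual(model, x, step=1.0, max_steps=50):
        # x is a 1-D numpy array of feature values; model is any fitted
        # classifier with a scikit-learn-style predict method.
        original = model.predict(x.reshape(1, -1))[0]
        for feature in range(len(x)):
            candidate = x.astype(float).copy()
            for _ in range(max_steps):
                candidate[feature] += step
                if model.predict(candidate.reshape(1, -1))[0] != original:
                    return feature, candidate  # changed feature and its new values
        return None  # no single-feature counterfactual found within the budget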

Visus: An Interactive System for Automatic Machine Learning Model Building and Curation

no code implementations • 5 Jul 2019 • Aécio Santos, Sonia Castelo, Cristian Felix, Jorge Piazentin Ono, Bowen Yu, Sungsoo Hong, Cláudio T. Silva, Enrico Bertini, Juliana Freire

In this paper, we present Visus, a system designed to support the model building process and curation of ML data processing pipelines generated by AutoML systems.

AutoML

RuleMatrix: Visualizing and Understanding Classifiers with Rules

1 code implementation • 17 Jul 2018 • Yao Ming, Huamin Qu, Enrico Bertini

With the growing adoption of machine learning techniques, there is a surge of research interest in making machine learning systems more transparent and interpretable.

A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations

1 code implementation • 4 May 2017 • Josua Krause, Aritra Dasgupta, Jordan Swartz, Yindalon Aphinyanaphongs, Enrico Bertini

Human-in-the-loop data analysis applications necessitate greater transparency in machine learning models for experts to understand and trust their decisions.

Using Visual Analytics to Interpret Predictive Machine Learning Models

no code implementations • 17 Jun 2016 • Josua Krause, Adam Perer, Enrico Bertini

It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power.
