Search Results for author: Carlos Scheidegger

Found 13 papers, 6 papers with code

Persistent Classification: A New Approach to Stability of Data and Adversarial Examples

no code implementations • 11 Apr 2024 • Brian Bell, Michael Geyer, David Glickenstein, Keaton Hamm, Carlos Scheidegger, Amanda Fernandez, Juston Moore

This article proposes a new framework for studying adversarial examples that does not depend directly on the distance to the decision boundary.
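
The stability notion in the title can be made concrete with a Monte Carlo probe: instead of measuring distance to the decision boundary, estimate how often random perturbations of an input keep the model's label. A minimal sketch of that idea; the toy classifier, noise scale, and sample count are illustrative assumptions, not the paper's setup:

import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Illustrative setup: a small classifier on a toy dataset.
X, y = make_moons(n_samples=500, noise=0.1, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)

def persistence(clf, x, sigma=0.1, n_samples=1000, rng=None):
    """Fraction of Gaussian perturbations of x that keep x's label.

    A boundary-free stability score: 1.0 means every sampled
    neighbor is classified like x; values near 0.5 suggest x sits
    in an unstable region.
    """
    rng = rng or np.random.default_rng(0)
    base = clf.predict(x.reshape(1, -1))[0]
    noisy = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    return float(np.mean(clf.predict(noisy) == base))

print(persistence(clf, X[0]))  # close to 1.0 for a point far from the boundary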

UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data

no code implementations • 2 Nov 2021 • Mateus Espadoto, Gabriel Appleby, Ashley Suh, Dylan Cashman, MingWei Li, Carlos Scheidegger, Erik W Anderson, Remco Chang, Alexandru C Telea

Projection techniques are often used to visualize high-dimensional data, allowing users to better understand the overall structure of multi-dimensional spaces on a 2D screen.

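One concrete reading of "inverse projection": fit a forward projection to 2D, then train a regressor that maps 2D coordinates back to the original feature space, so new points in the plane can be unprojected. A minimal sketch under that assumption; PCA and an MLP stand in for whatever projection and inverse model the paper actually uses:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

X, _ = load_digits(return_X_y=True)  # 64-dimensional inputs

# Forward projection: high-dimensional data -> 2D coordinates.
proj = PCA(n_components=2).fit(X)
X2 = proj.transform(X)

# Inverse projection: learn to map 2D coordinates back to 64-D.
inv = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000,
                   random_state=0).fit(X2, X)

# Any point in the 2D view can now be unprojected, e.g. a probe
# between existing points on the screen.
probe = X2[:1] + np.array([[0.5, -0.5]])
reconstruction = inv.predict(probe)  # shape (1, 64)
print(reconstruction.shape)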

Comparing Deep Neural Nets with UMAP Tour

no code implementations • 18 Oct 2021 • MingWei Li, Carlos Scheidegger

Neural networks should be interpretable to humans.
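
One hedged way to operationalize such a comparison: embed the layer activations of two networks with UMAP and check how their neighborhood structures line up. The sketch below uses random matrices as stand-ins for real activations and the umap-learn package; the paper's aligned "tour" view is not reproduced here:

import numpy as np
import umap  # pip install umap-learn
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Stand-ins for layer activations of two networks on the same
# 500 inputs (in practice these would come from forward hooks).
acts_a = rng.standard_normal((500, 256))
acts_b = acts_a @ rng.standard_normal((256, 128))  # a "later layer"

emb_a = umap.UMAP(n_neighbors=15, random_state=0).fit_transform(acts_a)
emb_b = umap.UMAP(n_neighbors=15, random_state=0).fit_transform(acts_b)

# One crude comparison: do the same inputs keep the same
# neighbors in both embeddings?
_, idx_a = NearestNeighbors(n_neighbors=10).fit(emb_a).kneighbors(emb_a)
_, idx_b = NearestNeighbors(n_neighbors=10).fit(emb_b).kneighbors(emb_b)
overlap = np.mean([len(set(a) & set(b)) / 10
                   for a, b in zip(idx_a, idx_b)])
print(f"mean neighborhood overlap: {overlap:.2f}")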

The ANTARES Astronomical Time-Domain Event Broker

no code implementations • 24 Nov 2020 • Thomas Matheson, Carl Stubens, Nicholas Wolf, Chien-Hsiu Lee, Gautham Narayan, Abhijit Saha, Adam Scott, Monika Soraisam, Adam S. Bolton, Benjamin Hauger, David R. Silva, John Kececioglu, Carlos Scheidegger, Richard Snodgrass, Patrick D. Aleo, Eric Evans-Jacquez, Navdeep Singh, Zhe Wang, Shuo Yang, Zhenge Zhao

We describe the Arizona-NOIRLab Temporal Analysis and Response to Events System (ANTARES), a software instrument designed to process large-scale streams of astronomical time-domain alerts.

Instrumentation and Methods for Astrophysics
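
The core job described, filtering a high-rate alert stream down to events worth follow-up, can be sketched generically. This is not the ANTARES API; the alert fields and the filter rule below are placeholder assumptions for illustration:

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Alert:
    """Hypothetical time-domain alert (fields are illustrative)."""
    alert_id: str
    ra: float          # right ascension, degrees
    dec: float         # declination, degrees
    magnitude: float
    delta_mag: float   # brightness change since last observation

def interesting(alert: Alert) -> bool:
    # Placeholder science filter: flag fast, large brightenings.
    return alert.delta_mag <= -1.0 and alert.magnitude < 20.0

def broker(stream: Iterable[Alert]) -> Iterator[Alert]:
    """Pass through only the alerts worth a follow-up trigger."""
    for alert in stream:
        if interesting(alert):
            yield alert

# Usage with a toy stream:
stream = [
    Alert("a1", 150.1, 2.2, 19.5, -1.4),
    Alert("a2", 150.3, 2.1, 21.0, -0.2),
]
for alert in broker(stream):
    print("follow up:", alert.alert_id)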

Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

1 code implementation • NeurIPS 2019 • Charles T. Marx, Richard Lanas Phillips, Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

Specifically, we show that disentangled representations provide a mechanism to identify proxy features in the dataset, while allowing an explicit computation of feature influence on either individual outcomes or aggregate-level outcomes.
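
A stylized sketch of the intervention step this describes, assuming an encoder/decoder pair already exists in which one latent coordinate isolates the feature of interest; learning that disentangled pair is the hard part and is omitted here (identity maps stand in):

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the paper's setting these would be a learned
# disentangled encoder/decoder. Here they are identity maps, so
# latent coordinate 0 trivially isolates input feature 0.
W_enc = np.eye(4)
W_dec = np.eye(4)
encode = lambda x: x @ W_enc
decode = lambda z: z @ W_dec

# The (black-box) model being audited.
model = lambda x: x @ np.array([2.0, 0.5, 0.0, 1.0])

def influence_of_latent(x, dim, delta=1.0):
    """Prediction change when only latent `dim` is intervened on."""
    z = encode(x)
    z_int = z.copy()
    z_int[:, dim] += delta
    return model(decode(z_int)) - model(decode(z))

X = rng.standard_normal((5, 4))
print(influence_of_latent(X, dim=0))  # ~2.0 per unit: feature 0's weight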

Assessing the Local Interpretability of Machine Learning Models

no code implementations • 9 Feb 2019 • Dylan Slack, Sorelle A. Friedler, Carlos Scheidegger, Chitradeep Dutta Roy

Through a user study with 1,000 participants, we test whether humans perform well on tasks that mimic the definitions of simulatability and "what if" local explainability on models that are typically considered locally interpretable.


Fairness in representation: quantifying stereotyping as a representational harm

no code implementations • 28 Jan 2019 • Mohsen Abbasi, Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

While harms of allocation have been increasingly studied as part of the subfield of algorithmic fairness, harms of representation have received considerably less attention.

Fairness

A comparative study of fairness-enhancing interventions in machine learning

4 code implementations • 13 Feb 2018 • Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, Derek Roth

Concretely, we present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures, across a large number of existing datasets.

Fairness
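
The benchmark's basic unit of work, train a classifier and score it under a fairness measure, is easy to sketch. Below, demographic parity difference on synthetic data stands in for the suite of measures and datasets the released implementations actually cover:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: two features plus a binary group attribute that
# leaks into the label, so a naive model shows group disparity.
n = 2000
group = rng.integers(0, 2, n)
X = np.column_stack([rng.standard_normal(n) + 0.8 * group,
                     rng.standard_normal(n)])
y = (X[:, 0] + 0.5 * rng.standard_normal(n) > 0.4).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Demographic parity difference: gap in positive-prediction rates.
rates = [pred[g_te == g].mean() for g in (0, 1)]
print(f"P(pred=1 | g=0) = {rates[0]:.2f}, P(pred=1 | g=1) = {rates[1]:.2f}")
print(f"demographic parity difference = {abs(rates[0] - rates[1]):.2f}")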

Runaway Feedback Loops in Predictive Policing

1 code implementation • 29 Jun 2017 • Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger, Suresh Venkatasubramanian

Predictive policing systems are increasingly used to determine how to allocate police across a city in order to best prevent crime.
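
The feedback mechanism is straightforward to simulate: if patrols are dispatched in proportion to previously discovered incidents, and incidents are only discovered where patrols go, allocation can run away from the true crime rates. A toy urn-style simulation of that loop; the rates and update rule are illustrative, not the paper's exact model:

import numpy as np

rng = np.random.default_rng(0)

true_rate = np.array([0.10, 0.10])   # two districts, equal crime rates
discovered = np.array([1.0, 1.0])    # historical incident counts

for day in range(10_000):
    # Send the single patrol where past data says crime is:
    # probability proportional to discovered incidents so far.
    p = discovered / discovered.sum()
    district = rng.choice(2, p=p)
    # Crime is only observed where the patrol actually goes.
    if rng.random() < true_rate[district]:
        discovered[district] += 1

# Despite identical true rates, the allocation typically drifts
# toward whichever district got lucky early: a runaway loop.
print("allocation share:", discovered / discovered.sum())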

On the (im)possibility of fairness

2 code implementations • 23 Sep 2016 • Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

We show that in order to prove desirable properties of the entire decision-making process, different mechanisms for fairness require different assumptions about the nature of the mapping from construct space to decision space.

Decision Making • Fairness
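
A hedged formalization of that sentence (notation mine, not the paper's): the decision pipeline factors as

\[
\text{CS} \xrightarrow{\;g\;} \text{OS} \xrightarrow{\;f\;} \text{DS}
\]

where CS is the construct space (the features one wishes to measure), OS the observed space (what is actually measured), and DS the decision space. A WYSIWYG-style mechanism assumes the observation map g is nearly distance-preserving,

\[
\big|\, d_{OS}(g(x), g(y)) - d_{CS}(x, y) \,\big| \le \epsilon ,
\]

while a structural-bias worldview allows g to distort distances between groups; the two assumptions license different fairness guarantees about the learned decision map f.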

Auditing Black-box Models for Indirect Influence

2 code implementations • 23 Feb 2016 • Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, Suresh Venkatasubramanian

It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different features influence the model prediction.

Attribute • Feature Selection
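
For context on "indirect influence": the paper's auditing approach obscures a feature, removing not just the column itself but what the other columns reveal about it, and measures the resulting drop in model performance. The sketch below shows only the simpler permutation-style probe of direct influence, which by design misses proxy-carried (indirect) influence; it is here to make that gap concrete, not to reproduce the method:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6,
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
base = clf.score(X_te, y_te)

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j
    drop = base - clf.score(X_perm, y_te)
    print(f"feature {j}: accuracy drop {drop:+.3f}")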
