Search Results for author: Adam Perer

Found 15 papers, 6 papers with code

Using Visual Analytics to Interpret Predictive Machine Learning Models

no code implementations · 17 Jun 2016 · Josua Krause, Adam Perer, Enrico Bertini

It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power.

BIG-bench Machine Learning

Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models

1 code implementation · 25 Apr 2018 · Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush

In this work, we present a visual analysis tool that allows interaction with a trained sequence-to-sequence model through each stage of the translation process.

Translation

Debugging Sequence-to-Sequence Models with Seq2Seq-Vis

no code implementations · WS 2018 · Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander Rush

Neural attention-based sequence-to-sequence models (seq2seq) (Sutskever et al., 2014; Bahdanau et al., 2014) have proven to be accurate and robust for many sequence prediction tasks.

Attribute · Translation

Regularizing Black-box Models for Improved Interpretability

1 code implementation · NeurIPS 2020 · Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.

BIG-bench Machine Learning · Interpretable Machine Learning
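
To make the contrast in the abstract above concrete, here is a minimal, hedged sketch of a generic post-hoc explanation: a from-scratch, LIME-style local linear surrogate fitted around one prediction of a black-box model. The data, model, and proximity kernel are illustrative assumptions; this is not the explanation-quality regularizer proposed in the paper.

```python
# Illustrative sketch only: a generic LIME-style post-hoc explanation of a
# black-box model on synthetic data. NOT the regularizer from the paper above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

black_box = GradientBoostingRegressor().fit(X, y)        # opaque model to explain

x0 = X[0]                                                 # instance to explain
samples = x0 + 0.3 * rng.normal(size=(200, 4))            # perturbations around x0
weights = np.exp(-np.sum((samples - x0) ** 2, axis=1))    # proximity kernel

# Weighted local linear surrogate; its coefficients act as feature attributions.
surrogate = LinearRegression().fit(samples, black_box.predict(samples),
                                   sample_weight=weights)
print("local feature attributions:", np.round(surrogate.coef_, 3))
```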

Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures

1 code implementation · 30 Jul 2019 · Dylan Cashman, Adam Perer, Remco Chang, Hendrik Strobelt

In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation with neural network architectures.

TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between Corpora

1 code implementation · NAACL 2021 · Denis Newman-Griffis, Venkatesh Sivaraman, Adam Perer, Eric Fosler-Lussier, Harry Hochheiser

Embeddings of words and concepts capture syntactic and semantic regularities of language; however, they have seen limited use as tools to study characteristics of different corpora and how they relate to one another.
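
As a rough illustration of the kind of question this line of work targets, the hedged sketch below compares a term's nearest neighbors under embeddings trained on two different corpora. The vocabulary and the random vectors are toy placeholders standing in for corpus-specific embeddings; this is not TextEssence itself.

```python
# Hypothetical sketch: inspect how a term's nearest neighbors differ between
# embeddings trained on two corpora. Vocabulary and vectors are toy stand-ins.
import numpy as np

def nearest_neighbors(word, vocab, vectors, k=3):
    """k most cosine-similar words to `word` within one embedding space."""
    idx = vocab.index(word)
    v = vectors[idx]
    sims = vectors @ v / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(v) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims) if i != idx][:k]

vocab = ["virus", "infection", "patient", "computer", "software"]
rng = np.random.default_rng(0)
news_vecs = rng.normal(size=(len(vocab), 50))       # stand-in: news corpus
clinical_vecs = rng.normal(size=(len(vocab), 50))   # stand-in: clinical corpus

print("news corpus    :", nearest_neighbors("virus", vocab, news_vecs))
print("clinical corpus:", nearest_neighbors("virus", vocab, clinical_vecs))
```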

Discovering and Validating AI Errors With Crowdsourced Failure Reports

no code implementations · 23 Sep 2021 · Ángel Alexander Cabrera, Abraham J. Druck, Jason I. Hong, Adam Perer

AI systems can fail to learn important behaviors, leading to real-world issues like safety concerns and biases.

Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment

no code implementations · 4 Nov 2021 · Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer

Explainable AI (XAI) is a promising means of supporting human-AI collaboration in high-stakes visual detection tasks, such as damage detection from satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.

Explainable Artificial Intelligence (XAI)

Emblaze: Illuminating Machine Learning Representations through Interactive Comparison of Embedding Spaces

1 code implementation · 5 Feb 2022 · Venkatesh Sivaraman, Yiwei Wu, Adam Perer

Modern machine learning techniques commonly rely on complex, high-dimensional embedding representations to capture underlying structure in the data and improve performance.

BIG-bench Machine Learning
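
Below is a small, hedged sketch of one statistic an embedding-comparison view like Emblaze's could surface: per-point overlap between k-nearest-neighbor sets in two embedding spaces. The synthetic vectors and the perturbation level are assumptions made for the example; this is not the tool's actual implementation.

```python
# Synthetic illustration: quantify how much each point's local neighborhood
# changes between two embedding spaces (e.g. two checkpoints of a model).
# Not Emblaze's code; data and perturbation level are made up.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    return [set(row[1:]) for row in idx]      # drop the self-match in column 0

rng = np.random.default_rng(1)
space_a = rng.normal(size=(200, 64))                    # embeddings from model A
space_b = space_a + 0.3 * rng.normal(size=(200, 64))    # perturbed "model B"

jaccard = [len(a & b) / len(a | b)
           for a, b in zip(knn_sets(space_a), knn_sets(space_b))]
print("mean neighborhood overlap:", round(float(np.mean(jaccard)), 3))
```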

"Public(s)-in-the-Loop": Facilitating Deliberation of Algorithmic Decisions in Contentious Public Policy Domains

no code implementations · 22 Apr 2022 · Hong Shen, Ángel Alexander Cabrera, Adam Perer, Jason Hong

This position paper offers a framework to think about how to better involve human influence in algorithmic decision-making of contentious public policy issues.

Decision Making · Position

An Interactive Interpretability System for Breast Cancer Screening with Deep Learning

no code implementations · 30 Sep 2022 · Yuzhe Lu, Adam Perer

Deep learning methods, in particular convolutional neural networks, have emerged as a powerful tool in medical image computing tasks.

Decision Making

Improving Human-AI Collaboration With Descriptions of AI Behavior

no code implementations · 6 Jan 2023 · Ángel Alexander Cabrera, Adam Perer, Jason I. Hong

People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.

Decision Making · Satellite Image Classification

The Impact of Imperfect XAI on Human-AI Decision-Making

no code implementations · 25 Jul 2023 · Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, Adam Perer

Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.

Decision Making · Explainable Artificial Intelligence (XAI)

How Consistent are Clinicians? Evaluating the Predictability of Sepsis Disease Progression with Dynamics Models

2 code implementations · 10 Apr 2024 · Unnseo Park, Venkatesh Sivaraman, Adam Perer

Reinforcement learning (RL) is a promising approach to generate treatment policies for sepsis patients in intensive care.

Reinforcement Learning (RL)
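
To ground the abstract's opening sentence, here is a deliberately toy sketch of learning a "treatment" policy with tabular Q-learning on a made-up patient MDP. The states, actions, dynamics, and rewards are all invented for illustration and have no relation to the paper's sepsis cohort or its dynamics models.

```python
# Toy illustration of RL-derived treatment policies: tabular Q-learning on a
# synthetic 5-state "severity" MDP. Everything here is invented for illustration.
import numpy as np

n_states, n_actions = 5, 2        # severity 0 (recovered) .. 4; 2 treatment options
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Made-up dynamics: treating (a=1) tends to lower severity but has a small cost."""
    drift = rng.choice([-1, 0, 1], p=[0.2, 0.3, 0.5])
    s_next = int(np.clip(s + drift - a, 0, n_states - 1))
    reward = 1.0 if s_next == 0 else -0.1 - 0.05 * a
    return s_next, reward

for _ in range(5000):                          # epsilon-greedy Q-learning episodes
    s = int(rng.integers(1, n_states))
    for _ in range(20):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        if s_next == 0:
            break
        s = s_next

print("greedy treatment choice per severity state:", np.argmax(Q, axis=1))
```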
