Search Results for author: Adam Perer

Found 12 papers, 5 papers with code

An Interactive Interpretability System for Breast Cancer Screening with Deep Learning

no code implementations • 30 Sep 2022 • Yuzhe Lu, Adam Perer

Deep learning methods, in particular convolutional neural networks, have emerged as a powerful tool in medical image computing tasks.

Decision Making

"Public(s)-in-the-Loop": Facilitating Deliberation of Algorithmic Decisions in Contentious Public Policy Domains

no code implementations • 22 Apr 2022 • Hong Shen, Ángel Alexander Cabrera, Adam Perer, Jason Hong

This position paper offers a framework to think about how to better involve human influence in algorithmic decision-making of contentious public policy issues.

Decision Making

Emblaze: Illuminating Machine Learning Representations through Interactive Comparison of Embedding Spaces

1 code implementation • 5 Feb 2022 • Venkatesh Sivaraman, Yiwei Wu, Adam Perer

Modern machine learning techniques commonly rely on complex, high-dimensional embedding representations to capture underlying structure in the data and improve performance.

BIG-bench Machine Learning

Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment

no code implementations • 4 Nov 2021 • Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer

Explainable AI (XAI) is a promising means of supporting human-AI collaboration on high-stakes visual detection tasks, such as damage detection from satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.

Discovering and Validating AI Errors With Crowdsourced Failure Reports

no code implementations • 23 Sep 2021 • Ángel Alexander Cabrera, Abraham J. Druck, Jason I. Hong, Adam Perer

AI systems can fail to learn important behaviors, leading to real-world issues like safety concerns and biases.

TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between Corpora

1 code implementation • NAACL 2021 • Denis Newman-Griffis, Venkatesh Sivaraman, Adam Perer, Eric Fosler-Lussier, Harry Hochheiser

Embeddings of words and concepts capture syntactic and semantic regularities of language; however, they have seen limited use as tools to study characteristics of different corpora and how they relate to one another.

Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures

1 code implementation • 30 Jul 2019 • Dylan Cashman, Adam Perer, Remco Chang, Hendrik Strobelt

In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures.

Regularizing Black-box Models for Improved Interpretability

1 code implementation • NeurIPS 2020 • Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar

Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.

BIG-bench Machine Learning • Interpretable Machine Learning

Debugging Sequence-to-Sequence Models with Seq2Seq-Vis

no code implementations • WS 2018 • Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander Rush

Neural attention-based sequence-to-sequence models (seq2seq) (Sutskever et al., 2014; Bahdanau et al., 2014) have proven to be accurate and robust for many sequence prediction tasks.


Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models

1 code implementation • 25 Apr 2018 • Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush

In this work, we present a visual analysis tool that allows interaction with a trained sequence-to-sequence model through each stage of the translation process.


Using Visual Analytics to Interpret Predictive Machine Learning Models

no code implementations • 17 Jun 2016 • Josua Krause, Adam Perer, Enrico Bertini

It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power.

BIG-bench Machine Learning
