no code implementations • 17 Jun 2016 • Josua Krause, Adam Perer, Enrico Bertini
It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power.
1 code implementation • 25 Apr 2018 • Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush
In this work, we present a visual analysis tool that allows interaction with a trained sequence-to-sequence model through each stage of the translation process.
no code implementations • WS 2018 • Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush
Neural attention-based sequence-to-sequence models (seq2seq) (Sutskever et al., 2014; Bahdanau et al., 2014) have proven to be accurate and robust for many sequence prediction tasks.
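For context, a minimal sketch of the dot-product attention at the core of such seq2seq models (illustrative only; this is a generic Luong-style formulation, not the implementation studied in the paper, and the toy dimensions are made up):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dot_product_attention(decoder_state, encoder_states):
    """Dot-product attention over encoder hidden states.

    decoder_state:  (d,)    current decoder hidden state
    encoder_states: (T, d)  one hidden state per source token
    Returns the context vector and the attention weights.
    """
    scores = encoder_states @ decoder_state   # (T,) alignment scores
    weights = softmax(scores)                 # (T,) normalized attention
    context = weights @ encoder_states        # (d,) weighted sum of states
    return context, weights

# Toy example: 5 source tokens, hidden size 4.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 4))
dec = rng.normal(size=4)
context, weights = dot_product_attention(dec, enc)
print(weights)  # one weight per source token, sums to 1
```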
1 code implementation • NeurIPS 2020 • Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.
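To make the contrast concrete, here is a minimal sketch of a post-hoc explanation via a local linear surrogate (a simplified LIME-style procedure under assumed defaults; the data, black-box model, and `local_linear_explanation` helper are hypothetical, not the paper's method):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Synthetic data and a black-box model to explain.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

def local_linear_explanation(model, x, scale=0.3, n=200):
    """Fit a linear surrogate to the model's predictions near point x.

    The surrogate's coefficients approximate local feature importance;
    their quality depends on how linear the model is in this neighborhood,
    which is exactly the unpredictability the abstract refers to.
    """
    neighborhood = x + rng.normal(scale=scale, size=(n, len(x)))
    preds = model.predict(neighborhood)
    surrogate = LinearRegression().fit(neighborhood, preds)
    return surrogate.coef_

print(local_linear_explanation(black_box, X[0]))  # per-feature local weights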
1 code implementation • 30 Jul 2019 • Dylan Cashman, Adam Perer, Remco Chang, Hendrik Strobelt
In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation with neural network architectures.
1 code implementation • NAACL 2021 • Denis Newman-Griffis, Venkatesh Sivaraman, Adam Perer, Eric Fosler-Lussier, Harry Hochheiser
Embeddings of words and concepts capture syntactic and semantic regularities of language; however, they have seen limited use as tools to study characteristics of different corpora and how they relate to one another.
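One simple way to compare embedding spaces trained on different corpora is nearest-neighbor overlap: a word whose local neighborhood changes between spaces is used differently across corpora. A minimal sketch (an illustrative diagnostic, not the paper's method; the embeddings here are random stand-ins):

```python
import numpy as np

def nearest_neighbors(emb, k=5):
    """Indices of the k nearest neighbors (cosine) for each row of emb."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -np.inf)          # exclude the word itself
    return np.argsort(-sims, axis=1)[:, :k]

def neighbor_overlap(emb_a, emb_b, k=5):
    """Per-word Jaccard overlap of k-NN sets across two embedding spaces."""
    na, nb = nearest_neighbors(emb_a, k), nearest_neighbors(emb_b, k)
    return np.array([len(set(a) & set(b)) / len(set(a) | set(b))
                     for a, b in zip(na, nb)])

rng = np.random.default_rng(0)
emb_corpus_a = rng.normal(size=(100, 50))   # stand-in for trained embeddings
emb_corpus_b = emb_corpus_a + rng.normal(scale=0.1, size=(100, 50))
print(neighbor_overlap(emb_corpus_a, emb_corpus_b).mean())
```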
no code implementations • 23 Sep 2021 • Ángel Alexander Cabrera, Abraham J. Druck, Jason I. Hong, Adam Perer
AI systems can fail to learn important behaviors, leading to real-world issues like safety concerns and biases.
no code implementations • 4 Nov 2021 • Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer
Explainable AI (XAI) is a promising means of supporting human-AI collaboration for high-stakes visual detection tasks, such as detecting damage in satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.
1 code implementation • 5 Feb 2022 • Venkatesh Sivaraman, Yiwei Wu, Adam Perer
Modern machine learning techniques commonly rely on complex, high-dimensional embedding representations to capture underlying structure in the data and improve performance.
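A common first step for inspecting such high-dimensional embeddings is projecting them to 2D; a minimal sketch using PCA (illustrative only, with synthetic clustered data standing in for learned embeddings; t-SNE or UMAP are common nonlinear alternatives):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for learned embeddings: 900 points in 128 dimensions,
# drawn from three clusters so structure is visible after projection.
rng = np.random.default_rng(0)
centers = rng.normal(scale=5.0, size=(3, 128))
points = np.vstack([c + rng.normal(size=(300, 128)) for c in centers])

# Project to 2D along the directions of greatest variance.
coords = PCA(n_components=2).fit_transform(points)
print(coords.shape)  # (900, 2) -> ready to scatter-plot for inspection
```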
no code implementations • 5 Apr 2022 • Anna Kawakami, Venkatesh Sivaraman, Hao-Fei Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts.
no code implementations • 22 Apr 2022 • Hong Shen, Ángel Alexander Cabrera, Adam Perer, Jason Hong
This position paper offers a framework to think about how to better involve human influence in algorithmic decision-making of contentious public policy issues.
no code implementations • 30 Sep 2022 • Yuzhe Lu, Adam Perer
Deep learning methods, in particular convolutional neural networks, have emerged as a powerful tool in medical image computing tasks.
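For readers unfamiliar with the building blocks, a minimal convolutional classifier for single-channel images such as grayscale scans (a generic illustrative sketch in PyTorch, not the architecture used in the paper; input size and class count are assumptions):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal convolutional classifier for 1-channel 64x64 images."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)            # (batch, 32)
        return self.classifier(x)                  # (batch, n_classes)

model = TinyCNN()
scan_batch = torch.randn(8, 1, 64, 64)  # batch of fake grayscale scans
print(model(scan_batch).shape)          # torch.Size([8, 2])
```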
no code implementations • 6 Jan 2023 • Ángel Alexander Cabrera, Adam Perer, Jason I. Hong
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
no code implementations • 25 Jul 2023 • Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, Adam Perer
Our findings reveal how imperfect XAI and humans' level of expertise influence their reliance on AI and human-AI team performance.
2 code implementations • 10 Apr 2024 • Unnseo Park, Venkatesh Sivaraman, Adam Perer
Reinforcement learning (RL) is a promising approach to generate treatment policies for sepsis patients in intensive care.
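To illustrate the underlying idea, a minimal tabular Q-learning sketch on a toy MDP standing in for a patient-state model (entirely synthetic transitions and rewards, no clinical data, and not the method used in the paper):

```python
import numpy as np

# Toy MDP: 5 discrete states, 3 discrete "treatment" actions.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 3
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))  # transitions
R = rng.normal(size=(N_STATES, N_ACTIONS))                        # rewards

def q_learning(episodes=2000, gamma=0.95, alpha=0.1, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration.

    Returns the greedy action for each state, i.e. a learned policy.
    """
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(20):  # bounded episode length
            a = rng.integers(N_ACTIONS) if rng.random() < eps else Q[s].argmax()
            s_next = rng.choice(N_STATES, p=P[s, a])
            target = R[s, a] + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])   # TD update
            s = s_next
    return Q.argmax(axis=1)

print(q_learning())  # greedy action per state
```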