Deep learning methods, in particular convolutional neural networks, have emerged as a powerful tool in medical image computing tasks.
This position paper offers a framework for thinking about how to better incorporate human influence into algorithmic decision-making on contentious public policy issues.
AI-based decision support tools (ADS) are increasingly used to augment human decision-making in high-stakes, social contexts.
Modern machine learning techniques commonly rely on complex, high-dimensional embedding representations to capture underlying structure in the data and improve performance.
Explainable AI (XAI) is a promising means of supporting human-AI collaboration in high-stakes visual detection tasks, such as detecting damage from satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.
AI systems can fail to learn important behaviors, leading to real-world issues like safety concerns and biases.
Embeddings of words and concepts capture syntactic and semantic regularities of language; however, they have seen limited use as tools to study characteristics of different corpora and how they relate to one another.
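The regularities mentioned above are often illustrated by vector-arithmetic analogies such as king - man + woman ≈ queen. The sketch below demonstrates the idea with hypothetical toy vectors and plain cosine similarity; it is an illustration of the concept, not trained embeddings from any particular corpus.

```python
import numpy as np

# Toy illustration of analogy arithmetic in an embedding space.
# The 4-dimensional vectors are hypothetical values chosen for this example;
# real embeddings (word2vec, GloVe, etc.) are learned from corpora and have
# hundreds of dimensions.
vectors = {
    "king":  np.array([0.8, 0.9, 0.1, 0.7]),
    "queen": np.array([0.8, 0.1, 0.1, 0.7]),
    "man":   np.array([0.2, 0.9, 0.0, 0.1]),
    "woman": np.array([0.2, 0.1, 0.0, 0.1]),
    "apple": np.array([0.0, 0.5, 0.9, 0.0]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Solve "man is to king as woman is to ?" by vector arithmetic, then pick the
# most similar word among the remaining candidates.
query = vectors["king"] - vectors["man"] + vectors["woman"]
candidates = {w: v for w, v in vectors.items() if w not in {"king", "man", "woman"}}
answer = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(answer)  # -> "queen"
```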
In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to quickly discover a deep learning model via exploration and rapid experimentation with neural network architectures.
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.
Neural attention-based sequence-to-sequence models (seq2seq) (Sutskever et al., 2014; Bahdanau et al., 2014) have proven to be accurate and robust for many sequence prediction tasks.
In this work, we present a visual analysis tool that allows interaction with a trained sequence-to-sequence model through each stage of the translation process.
It is commonly believed that increasing the interpretability of a machine learning model may decrease its predictive power.