Interfaces for machine learning (ML), which present information and visualizations about models or data, can help practitioners build robust and responsible ML systems.
The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances.
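The tabular layout described above is straightforward to compute. The following is a minimal sketch (function name and labels are illustrative, not from any particular paper's code): rows index actual classes, columns index predicted classes, and cell [i, j] counts the instances of actual class i that the model predicted as class j.

```python
import numpy as np

def confusion_matrix(actual, predicted, n_classes):
    """Build a confusion matrix: rows are actual class labels,
    columns are predicted class labels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(actual, predicted):
        cm[a, p] += 1  # one more instance of class a predicted as p
    return cm

# Toy labels for a 3-class problem.
actual    = [0, 0, 1, 1, 2, 2, 2]
predicted = [0, 1, 1, 1, 2, 0, 2]
cm = confusion_matrix(actual, predicted, 3)
# Diagonal cells are correct predictions; off-diagonal cells are errors,
# e.g. cm[0, 1] counts class-0 instances mistaken for class 1.
```

Per-class metrics such as recall follow directly, e.g. recall for class i is `cm[i, i] / cm[i].sum()`.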
Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts.
Deep learning's great success motivates many practitioners and students to learn about this exciting technology.
Deep neural networks (DNNs) are increasingly powering high-stakes applications such as autonomous cars and healthcare; however, DNNs are often treated as "black boxes" in such applications.
The success of deep learning in solving problems previously thought intractable has inspired many non-experts to learn about and understand this exciting technology.
In recent years, machine learning (ML) has gained significant popularity in the field of chemical informatics and electronic structure theory.
As deep neural networks are increasingly used to solve high-stakes problems, there is a pressing need to understand their internal decision mechanisms.
We present FairVis, a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models.
We present an interactive system enabling users to manipulate images to explore the robustness and sensitivity of deep learning image classifiers.
The rapidly growing body of research in adversarial machine learning has demonstrated that deep neural networks (DNNs) are highly vulnerable to adversarially generated images.
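One of the simplest attacks behind this vulnerability is the fast gradient sign method (FGSM), which nudges each input feature in the direction that increases the model's loss. Below is a minimal sketch on a toy logistic classifier rather than a DNN; the weights, inputs, and function names are all illustrative assumptions, but the perturbation rule x_adv = x + eps * sign(grad_x loss) is the same one used against image classifiers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Perturb input x to increase the cross-entropy loss of the
    logistic classifier p = sigmoid(w @ x + b) with true label y."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w               # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)   # small step that raises the loss

w = np.array([2.0, -1.0])              # toy classifier weights
b = 0.0
x = np.array([1.0, 0.5])               # correctly classified, true label 1
y = 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
# The perturbed input flips the prediction below 0.5 even though
# each feature moved by at most eps.
```

In the image setting, eps is kept small enough that the perturbed image looks unchanged to a human while the classifier's prediction flips.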
We present a survey of the role of visual analytics in deep learning research, which highlights its short yet impactful history and thoroughly summarizes the state-of-the-art using a human-centered interrogative framework, focusing on the Five W's and How (Why, Who, What, When, Where, and How).
We validate these models in two ways: quantitatively, by comparing our model's grid cell estimates aggregated at the county level to several US Census county-level population projections, and qualitatively, by directly interpreting the model's predictions in terms of the satellite image inputs.
Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition.