Despite being a critical communication skill, humor is challenging to master: successful use of humor requires a mixture of engaging content build-up and appropriate vocal delivery (e.g., pauses).
Two case studies and interviews with domain experts demonstrate the effectiveness of GNNLens in facilitating the understanding of GNN models and their errors.
With machine learning models being increasingly applied to various decision-making scenarios, there have been growing efforts to make machine learning models more transparent and explainable.
To guide the design of ATMSeer, we derive a workflow of using AutoML based on interviews with machine learning experts.
With the growing adoption of machine learning techniques, there is a surge of research interest in making machine learning systems more transparent and interpretable.
We propose a technique to explain the function of individual hidden state units based on their expected response to input texts.
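The last sentence describes profiling hidden units by their expected activation over input texts. As a minimal sketch of that idea (not the paper's actual method), the snippet below uses a toy character-level tanh RNN with random weights and compares each unit's mean response across two hypothetical groups of texts; all names and the corpus are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy character-level tanh RNN with random weights (a stand-in for a
# trained model; the technique would normally be applied to one).
VOCAB = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz !")}
HIDDEN = 8
W_xh = rng.normal(0, 0.5, (HIDDEN, len(VOCAB)))
W_hh = rng.normal(0, 0.5, (HIDDEN, HIDDEN))

def hidden_states(text):
    """Run the RNN over a text; return the hidden state at each step."""
    h = np.zeros(HIDDEN)
    states = []
    for ch in text.lower():
        if ch not in VOCAB:          # skip characters outside the vocab
            continue
        x = np.zeros(len(VOCAB))
        x[VOCAB[ch]] = 1.0           # one-hot encode the character
        h = np.tanh(W_xh @ x + W_hh @ h)
        states.append(h)
    return np.array(states)

def expected_response(texts):
    """Mean activation of each hidden unit over a set of input texts."""
    all_states = np.vstack([hidden_states(t) for t in texts])
    return all_states.mean(axis=0)

# Hypothetical text groups; units whose expected responses differ most
# between the groups are candidates for a group-specific "function".
texts_a = ["funny story!", "what a joke"]
texts_b = ["solemn report", "formal notice"]
diff = expected_response(texts_a) - expected_response(texts_b)
print(np.argsort(-np.abs(diff))[:3])  # units most sensitive to the contrast
```

The contrast in expected responses is only a sketch; a real analysis would use a trained model and a corpus chosen to isolate the property of interest.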