Search Results for author: Yao Ming

Found 7 papers, 4 papers with code

DeHumor: Visual Analytics for Decomposing Humor

no code implementations • 18 Jul 2021 • Xingbo Wang, Yao Ming, Tongshuang Wu, Haipeng Zeng, Yong Wang, Huamin Qu

Despite being a critical communication skill, grasping humor is challenging -- successful use of humor requires both engaging content build-up and appropriate vocal delivery (e.g., pauses).

GNNLens: A Visual Analytics Approach for Prediction Error Diagnosis of Graph Neural Networks

no code implementations • 22 Nov 2020 • Zhihua Jin, Yong Wang, Qianwen Wang, Yao Ming, Tengfei Ma, Huamin Qu

Two case studies and interviews with domain experts demonstrate the effectiveness of GNNLens in facilitating the understanding of GNN models and their errors.

Node Classification

DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models

no code implementations • 19 Aug 2020 • Furui Cheng, Yao Ming, Huamin Qu

With machine learning models being increasingly applied to various decision-making scenarios, growing effort has been devoted to making these models more transparent and explainable.

Counterfactual Explanation, Decision Making

Interpretable and Steerable Sequence Learning via Prototypes

2 code implementations • 23 Jul 2019 • Yao Ming, Panpan Xu, Huamin Qu, Liu Ren

The prediction is obtained by comparing the inputs to a few prototypes, which are exemplar cases in the problem domain.

Sentiment Analysis
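
The abstract above describes prediction by comparing an input sequence to a few learned prototypes. A minimal sketch of that idea is shown below, assuming a PyTorch setup; class and parameter names are hypothetical and this is not the authors' actual ProSeNet implementation.

```python
# Hypothetical sketch of prototype-based sequence classification:
# encode a sequence, measure its similarity to a few prototype vectors,
# and predict a class from those similarities.
import torch
import torch.nn as nn

class PrototypeSequenceClassifier(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=16, hidden_dim=32,
                 n_prototypes=5, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Prototypes live in the same space as the sequence encoding.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, hidden_dim))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, token_ids):
        _, h = self.encoder(self.embed(token_ids))    # h: (1, batch, hidden)
        h = h.squeeze(0)                              # (batch, hidden)
        # Similarity to each prototype via a squared-distance kernel.
        dists = torch.cdist(h, self.prototypes) ** 2  # (batch, n_prototypes)
        sims = torch.exp(-dists)
        return self.classifier(sims)                  # class logits

model = PrototypeSequenceClassifier()
logits = model(torch.randint(0, 100, (4, 12)))        # 4 toy sequences, length 12
print(logits.shape)                                   # torch.Size([4, 2])
```

Because the final prediction is a function of similarities to a handful of prototypes, each prototype can be inspected (and, in the paper's framing, edited) as an exemplar case, which is what makes the model interpretable and steerable.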

ATMSeer: Increasing Transparency and Controllability in Automated Machine Learning

1 code implementation • 13 Feb 2019 • Qianwen Wang, Yao Ming, Zhihua Jin, Qiaomu Shen, Dongyu Liu, Micah J. Smith, Kalyan Veeramachaneni, Huamin Qu

To guide the design of ATMSeer, we derive a workflow of using AutoML based on interviews with machine learning experts.

AutoML

RuleMatrix: Visualizing and Understanding Classifiers with Rules

1 code implementation • 17 Jul 2018 • Yao Ming, Huamin Qu, Enrico Bertini

With the growing adoption of machine learning techniques, there is a surge of research interest in making machine learning systems more transparent and interpretable.

Understanding Hidden Memories of Recurrent Neural Networks

1 code implementation • 30 Oct 2017 • Yao Ming, Shaozu Cao, Ruixiang Zhang, Zhen Li, Yuanzhe Chen, Yangqiu Song, Huamin Qu

We propose a technique to explain the function of individual hidden state units based on their expected response to input texts.
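
One simple way to read "expected response to input texts" is to average the hidden state observed whenever a given word is fed to the RNN. The sketch below illustrates that reading under assumed toy data and model names; it is not the paper's actual method or code.

```python
# Hypothetical sketch: estimate each hidden unit's expected response to a word
# by averaging the hidden states produced when that word is the input.
import torch
import torch.nn as nn
from collections import defaultdict

vocab = ["the", "movie", "was", "great", "boring"]      # toy vocabulary
word2id = {w: i for i, w in enumerate(vocab)}
texts = [["the", "movie", "was", "great"],
         ["the", "movie", "was", "boring"]]

emb = nn.Embedding(len(vocab), 8)
rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True)

sums = defaultdict(lambda: torch.zeros(16))             # per-word sum of hidden states
counts = defaultdict(int)

with torch.no_grad():
    for words in texts:
        ids = torch.tensor([[word2id[w] for w in words]])
        outputs, _ = rnn(emb(ids))                      # (1, seq_len, 16)
        for t, w in enumerate(words):
            sums[w] += outputs[0, t]
            counts[w] += 1

# Average hidden state per word: one 16-dim "expected response" vector each.
expected_response = {w: sums[w] / counts[w] for w in sums}
print(expected_response["great"].shape)                 # torch.Size([16])
```

Units whose expected response differs sharply between words (e.g., "great" vs. "boring" in this toy setup) are candidates for carrying word- or sentiment-specific information.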
