Search Results for author: Chih-Kuan Yeh

Found 20 papers, 10 papers with code

Concept Gradient: Concept-based Interpretation Without Linear Assumption

no code implementations31 Aug 2022 Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh

We show that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change in the concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space.
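
The chain-rule idea admits a short numeric sketch. Everything below is a hypothetical illustration, not the paper's implementation: f_head and c_head stand in for a trained prediction head and a non-linear concept model over shared activations, and for a scalar concept the Jacobian pseudo-inverse reduces to grad_c / ||grad_c||^2.

```python
# Hypothetical sketch: estimating how a small change in a (possibly
# non-linear) concept moves the prediction, via gradients w.r.t. a
# shared activation vector. Names, shapes, and heads are illustrative.
import torch

torch.manual_seed(0)
dim = 16
a = torch.randn(dim, requires_grad=True)   # intermediate activations

# Stand-ins for trained heads: f maps activations to a class logit,
# c maps activations to a scalar concept score (non-linear on purpose).
f_head = torch.nn.Linear(dim, 1)
c_head = torch.nn.Sequential(torch.nn.Linear(dim, 8), torch.nn.Tanh(),
                             torch.nn.Linear(8, 1))

grad_f = torch.autograd.grad(f_head(a).squeeze(), a)[0]
grad_c = torch.autograd.grad(c_head(a).squeeze(), a)[0]

# For a scalar concept the Jacobian pseudo-inverse is grad_c/||grad_c||^2,
# so the chain rule gives an estimate of df/dc:
concept_gradient = (grad_f @ grad_c) / (grad_c @ grad_c)
print(f"estimated df/dc: {concept_gradient.item():.4f}")
```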

Faith-Shap: The Faithful Shapley Interaction Index

1 code implementation2 Mar 2022 Che-Ping Tsai, Chih-Kuan Yeh, Pradeep Ravikumar

We show that by additionally requiring the faithful interaction indices to satisfy interaction-extensions of the standard individual Shapley axioms (dummy, symmetry, linearity, and efficiency), we obtain a unique Faithful Shapley Interaction index, which we denote Faith-Shap, as a natural generalization of the Shapley value to interactions.
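
The "faithful" part can be illustrated on a toy game: choose interaction scores of order at most ell that best fit a set function in weighted least squares under the Shapley kernel. This sketch is an assumption-laden caricature, not the paper's code: the toy game, the large constant approximating the infinite kernel weights on the empty and full coalitions, and all names are illustrative.

```python
# Toy sketch of the faithfulness view behind Faith-Shap: fit interaction
# scores of order <= ell to a set function v by weighted least squares
# with the Shapley kernel. A real implementation handles the infinite
# weights on the empty/full coalitions exactly; a large constant is used
# here as a stand-in.
import itertools, math
import numpy as np

n, ell = 3, 2
players = range(n)
v = {s: len(s) + (2.0 if {0, 1} <= set(s) else 0.0)   # toy game with a 0-1 synergy
     for k in range(n + 1) for s in itertools.combinations(players, k)}

terms = [t for k in range(ell + 1) for t in itertools.combinations(players, k)]
subsets = list(v)

def kernel(s):
    k = len(s)
    if k == 0 or k == n:
        return 1e6                      # stand-in for the infinite weight
    return (n - 1) / (math.comb(n, k) * k * (n - k))

X = np.array([[1.0 if set(t) <= set(s) else 0.0 for t in terms] for s in subsets])
w = np.array([kernel(s) for s in subsets])
y = np.array([v[s] for s in subsets])

coef, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)
for t, c in zip(terms, coef):
    print(t, round(float(c), 3))       # the (0, 1) synergy should surface
```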

Human-Centered Concept Explanations for Neural Networks

no code implementations25 Feb 2022 Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar

We start by introducing concept explanations, including the class of Concept Activation Vectors (CAV), which characterize concepts using vectors in appropriate spaces of neural activations, and we discuss both the properties that make concepts useful and approaches to measure the usefulness of concept vectors.
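
As a concrete anchor for the CAV idea, here is a minimal sketch in the spirit of TCAV: fit a linear classifier that separates activations of concept examples from activations of random examples, and take the normalized weight vector as the concept direction. The activations and the gradient below are synthetic placeholders.

```python
# Minimal CAV sketch: the concept direction is the weight vector of a
# linear classifier separating concept activations from random ones.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64
concept_acts = rng.normal(0.5, 1.0, size=(200, dim))   # activations of concept inputs
random_acts  = rng.normal(0.0, 1.0, size=(200, dim))   # activations of random inputs

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# Concept sensitivity of a prediction: the directional derivative of the
# class logit along the CAV (the gradient here is a placeholder).
grad_logit = rng.normal(size=dim)
print(f"concept sensitivity: {grad_logit @ cav:.3f}")
```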

First is Better Than Last for Language Data Influence

1 code implementation24 Feb 2022 Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar

However, we observe that since the activations feeding into the last layer of weights contain "shared logic", data influence calculated via the last-layer weights is prone to a "cancellation effect", where the influences of different examples have large magnitudes that contradict each other.
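
The alternative the title points to can be sketched as gradient-dot-product (TracIn-style) influence computed on the first layer, the token embedding, instead of the last. The bag-of-tokens model and single-checkpoint simplification below are assumptions for illustration.

```python
# Hedged sketch: data influence as the dot product of first-layer
# (embedding) loss gradients between a training and a test example.
# Full TracIn sums this over checkpoints; one checkpoint is used here.
import torch, torch.nn as nn

torch.manual_seed(0)
vocab, dim, n_cls = 50, 8, 2
emb = nn.Embedding(vocab, dim)
head = nn.Linear(dim, n_cls)
loss_fn = nn.CrossEntropyLoss()

def first_layer_grad(tokens, label):
    """Loss gradient w.r.t. the embedding table, flattened."""
    emb.zero_grad(); head.zero_grad()
    logits = head(emb(tokens).mean(dim=0))          # mean-pooled bag of tokens
    loss_fn(logits.unsqueeze(0), label.unsqueeze(0)).backward()
    return emb.weight.grad.flatten().clone()

train = (torch.tensor([3, 7, 7]), torch.tensor(1))
test  = (torch.tensor([7, 9]),    torch.tensor(1))

influence = first_layer_grad(*train) @ first_layer_grad(*test)
print(f"first-layer influence: {influence.item():.5f}")
```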

Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations

no code implementations24 Feb 2022 Chih-Kuan Yeh, Kuan-Yun Lee, Frederick Liu, Pradeep Ravikumar

We formalize, as a set of axioms, the desiderata of value functions that respect both the model and the data manifold and that are robust to perturbations in off-manifold regions. We show that there exists a unique value function satisfying these axioms, which we term the Joint Baseline value function, with the resulting Shapley value the Joint Baseline Shapley (JBshap), and we validate the effectiveness of JBshap in experiments (see the sketch below).

Feature Importance
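
One way to read the Joint Baseline idea, sketched below as a guess rather than the paper's exact formulation: score a coalition S by the model output at the spliced point (x on S, baseline elsewhere), down-weighted by a density estimate so that off-manifold splices contribute little, then take exact Shapley values of that value function. The toy model and standard-normal density are assumptions.

```python
# Hedged toy sketch of a density-weighted baseline value function and
# the exact Shapley values it induces. The paper's exact form may differ.
import itertools, math
import numpy as np

def model(z):                      # toy model
    return 2.0 * z[0] + z[0] * z[1]

def density(z):                    # toy density estimate (standard normal)
    return math.exp(-0.5 * float(z @ z)) / (2 * math.pi)

x        = np.array([1.0, 2.0])
baseline = np.array([0.0, 0.0])
n = len(x)

def v(S):                          # spliced point, weighted by its density
    z = np.where([i in S for i in range(n)], x, baseline)
    return model(z) * density(z)

def shapley(i):
    val, rest = 0.0, [j for j in range(n) if j != i]
    for k in range(n):
        for S in itertools.combinations(rest, k):
            wgt = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            val += wgt * (v(set(S) | {i}) - v(set(S)))
    return val

print([round(shapley(i), 4) for i in range(n)])
```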

Minimizing FLOPs to Learn Efficient Sparse Representations

1 code implementation ICLR 2020 Biswajit Paria, Chih-Kuan Yeh, Ian E. H. Yen, Ning Xu, Pradeep Ravikumar, Barnabás Póczos

Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification.

Quantization Representation Learning +1
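
The efficiency angle of the paper above can be illustrated with a FLOPs-style regularizer: the expected cost of a sparse dot product scales with per-dimension activation probabilities, so penalizing the sum of squared mean activations spreads sparsity evenly across dimensions. The relaxation below (mean post-ReLU activation as a proxy for activation probability) is an assumption, not necessarily the paper's exact estimator.

```python
# Hedged sketch of a FLOPs-style penalty on sparse embeddings.
import torch

def flops_regularizer(acts: torch.Tensor) -> torch.Tensor:
    """acts: (batch, dim) non-negative sparse embeddings (e.g. post-ReLU)."""
    mean_act = acts.mean(dim=0)          # soft proxy for P(dimension is active)
    return (mean_act ** 2).sum()

acts = torch.relu(torch.randn(128, 512))
print(f"flops penalty: {flops_regularizer(acts).item():.3f}")
# Used as: loss = task_loss + lam * flops_regularizer(embeddings)
```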

On the (In)fidelity and Sensitivity of Explanations

1 code implementation NeurIPS 2019 Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
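
The infidelity measure itself is easy to state and to Monte-Carlo: INFD(phi, f, x) = E_I[(I^T phi - (f(x) - f(x - I)))^2]. A minimal sketch, with a toy model, a gradient explanation, and an assumed Gaussian perturbation distribution:

```python
# Monte-Carlo estimate of infidelity for a gradient explanation of a
# toy model. The perturbation distribution is a modeling choice.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # toy model
    return np.tanh(x).sum()

x = rng.normal(size=10)
phi = 1.0 - np.tanh(x) ** 2                 # gradient explanation of f at x

def infidelity(phi, x, n_samples=2000, scale=0.3):
    I = rng.normal(scale=scale, size=(n_samples, x.size))
    actual = np.array([f(x) - f(x - i) for i in I])   # true output changes
    return float(((I @ phi - actual) ** 2).mean())

print(f"infidelity of the gradient explanation: {infidelity(phi, x):.5f}")
```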

On Completeness-aware Concept-Based Explanations in Deep Neural Networks

2 code implementations NeurIPS 2020 Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar

Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations.
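
A rough sketch of a completeness check, under stated assumptions: project activations onto candidate concept vectors and measure how well the model's own predictions can be recovered from those projections alone. The paper's normalization and its interpretability-encouraging discovery objective are omitted; data below is synthetic.

```python
# Hedged completeness proxy: accuracy of reconstructing the model's
# predictions from concept scores alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, dim, n_concepts = 1000, 32, 4
acts = rng.normal(size=(n, dim))                  # penultimate activations
concepts = rng.normal(size=(n_concepts, dim))     # candidate concept vectors
w_true = rng.normal(size=dim)
model_preds = (acts @ w_true > 0).astype(int)     # the model's own labels

scores = acts @ concepts.T                        # concept scores per input
recon = LogisticRegression(max_iter=1000).fit(scores, model_preds)
print(f"completeness proxy (recon accuracy): {recon.score(scores, model_preds):.3f}")
```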

On Concept-Based Explanations in Deep Neural Networks

no code implementations25 Sep 2019 Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister

Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts.

Deep-Trim: Revisiting L1 Regularization for Connection Pruning of Deep Network

no code implementations ICLR 2019 Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar

State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.
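
The mechanics the title refers to can be sketched as: train with an L1 penalty on all weights, then magnitude-prune. The paper's contribution is the analysis of when this recovers a sparse network; the architecture, hyperparameters, and threshold below are arbitrary illustrations.

```python
# Minimal sketch of L1-regularized training followed by magnitude pruning.
import torch, torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

lam = 1e-3
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(X), y)
    loss = loss + lam * sum(p.abs().sum() for p in net.parameters())
    loss.backward(); opt.step()

with torch.no_grad():                 # prune small-magnitude connections
    for p in net.parameters():
        p[p.abs() < 1e-2] = 0.0

kept = sum((p != 0).sum().item() for p in net.parameters())
total = sum(p.numel() for p in net.parameters())
print(f"kept {kept}/{total} parameters")
```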

On the (In)fidelity and Sensitivity for Explanations

2 code implementations27 Jan 2019 Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, Pradeep Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.

Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching

no code implementations ICLR 2019 Chih-Kuan Yeh, Jianshu Chen, Chengzhu Yu, Dong Yu

We consider the problem of training speech recognition systems without using any labeled data, under the assumption that the learner has access only to the input utterances and a phoneme language model estimated from a non-overlapping corpus (see the sketch below).

Language Modelling speech-recognition +2
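
A heavily simplified unigram caricature of the distribution-matching intuition in the paper above: without labels, push the aggregate distribution of predicted phonemes toward the distribution implied by the phoneme language model. The paper's segmental Empirical-ODM objective is considerably richer than this; the shapes and the KL direction here are assumptions.

```python
# Hedged unigram sketch of output distribution matching: KL between a
# language-model phoneme distribution and the mean predicted phoneme
# distribution over frames, differentiable end to end.
import torch

torch.manual_seed(0)
n_phonemes, n_frames = 40, 1000
logits = torch.randn(n_frames, n_phonemes, requires_grad=True)
lm_unigram = torch.softmax(torch.randn(n_phonemes), dim=0)   # from external LM

pred_marginal = torch.softmax(logits, dim=1).mean(dim=0)     # empirical output dist.
odm_loss = torch.sum(lm_unigram * (lm_unigram.log() - pred_marginal.log()))
odm_loss.backward()
print(f"unigram ODM loss (KL): {odm_loss.item():.4f}")
```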

Representer Point Selection for Explaining Deep Neural Networks

1 code implementation NeurIPS 2018 Chih-Kuan Yeh, Joon Sik Kim, Ian E. H. Yen, Pradeep Ravikumar

We propose to explain the predictions of a deep neural network, by pointing to the set of what we call representer points in the training set, for a given test point prediction.
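
The decomposition behind representer points is concrete enough for a numeric sketch: with an L2-regularized last layer, the pre-softmax prediction for a test point decomposes as a sum over training points of alpha_i * <feature_i, feature_test>, where alpha_i is proportional to the loss gradient at training point i. The data, the logistic-loss gradient, and lambda below are toy assumptions.

```python
# Hedged sketch of representer values for a binary logistic last layer:
# alpha_i = -(dL/df_i) / (2 * lambda * n), and each training point's
# contribution to a test prediction is alpha_i * <feat_i, feat_test>.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 100, 16
feats = rng.normal(size=(n, dim))             # penultimate-layer features
labels = rng.integers(0, 2, size=n)
lam = 0.01                                    # last-layer weight decay

w = rng.normal(size=dim) * 0.1                # stand-in last-layer weights
f_train = feats @ w
grad_loss = 1 / (1 + np.exp(-f_train)) - labels   # dL/df for logistic loss

alpha = -grad_loss / (2 * lam * n)            # representer values
x_test = rng.normal(size=dim)
contribution = alpha * (feats @ x_test)       # per-training-point contribution

top = np.argsort(-np.abs(contribution))[:3]
print("most influential training points:", top, contribution[top].round(3))
```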

Deep Generative Models for Weakly-Supervised Multi-Label Classification

no code implementations ECCV 2018 Hong-Min Chu, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In order to train learning models for multi-label classification (MLC), it is typically desirable to have a large amount of fully annotated multi-label data.

Classification General Classification +1

Multi-Label Zero-Shot Learning with Structured Knowledge Graphs

1 code implementation CVPR 2018 Chung-Wei Lee, Wei Fang, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which is able to predict multiple unseen class labels for each input instance.

General Classification Knowledge Graphs +3

Order-Free RNN with Visual Attention for Multi-Label Classification

1 code implementation18 Jul 2017 Shang-Fu Chen, Yi-Chen Chen, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification.

Classification General Classification +2

Learning Deep Latent Spaces for Multi-Label Classification

1 code implementation3 Jul 2017 Chih-Kuan Yeh, Wei-Chieh Wu, Wei-Jen Ko, Yu-Chiang Frank Wang

Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance.

Classification General Classification +1

Generative-Discriminative Variational Model for Visual Recognition

no code implementations7 Jun 2017 Chih-Kuan Yeh, Yao-Hung Hubert Tsai, Yu-Chiang Frank Wang

In other words, our GDVM casts the supervised learning task as a generative learning process, with data discrimination jointly exploited for improved classification (see the sketch below).

Classification General Classification +3
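
A guess at what a joint generative-discriminative objective of this kind looks like in code, assuming a VAE-style backbone (the actual GDVM architecture may differ): reconstruction plus KL plus a classification loss on the latent code, optimized jointly.

```python
# Hedged sketch of a joint generative + discriminative loss on a
# VAE-style latent code. Architecture and weighting are toy stand-ins.
import torch, torch.nn as nn, torch.nn.functional as F

torch.manual_seed(0)
enc = nn.Linear(32, 2 * 8)           # outputs latent mean and log-variance
dec = nn.Linear(8, 32)
cls = nn.Linear(8, 5)

x = torch.randn(64, 32)
y = torch.randint(0, 5, (64,))

mu, logvar = enc(x).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization

recon = F.mse_loss(dec(z), x)
kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
discrim = F.cross_entropy(cls(z), y)
loss = recon + kl + discrim          # generative + discriminative, jointly
print(f"joint loss: {loss.item():.3f}")
```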

Automatic Bridge Bidding Using Deep Reinforcement Learning

no code implementations12 Jul 2016 Chih-Kuan Yeh, Hsuan-Tien Lin

Existing artificial intelligence systems for bridge bidding rely on and are thus restricted by human-designed bidding systems or features.

Decision Making reinforcement-learning +1
