Search Results for author: Chih-Kuan Yeh

Found 14 papers, 8 papers with code

Minimizing FLOPs to Learn Efficient Sparse Representations

1 code implementation • ICLR 2020 • Biswajit Paria, Chih-Kuan Yeh, Ian E. H. Yen, Ning Xu, Pradeep Ravikumar, Barnabás Póczos

Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification.

Quantization • Representation Learning

On the (In)fidelity and Sensitivity of Explanations

1 code implementation • NeurIPS 2019 • Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
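
To make the two measures concrete, here is a minimal Monte Carlo sketch of infidelity and max-sensitivity for a gradient explanation of a toy quadratic model. The model, the Gaussian perturbation distribution, the sensitivity radius, and the sample counts are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch (not the authors' code): Monte Carlo estimates of the
# infidelity and max-sensitivity measures for a gradient explanation of a
# toy, hand-defined model f. Perturbation scale, radius, and sample counts
# are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy differentiable "model": a fixed quadratic score
    A = np.diag([1.0, 0.5, 2.0])
    return float(x @ A @ x)

def grad_explanation(x, eps=1e-5):
    # central-difference gradient as the attribution Phi(f, x)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def infidelity(x, phi, n_samples=1000, noise=0.5):
    # E_I [ (I^T phi - (f(x) - f(x - I)))^2 ] with Gaussian perturbations I
    errs = []
    for _ in range(n_samples):
        I = rng.normal(scale=noise, size=x.shape)
        errs.append((I @ phi - (f(x) - f(x - I))) ** 2)
    return float(np.mean(errs))

def max_sensitivity(x, radius=0.1, n_samples=200):
    # max over ||delta|| <= radius of ||Phi(f, x + delta) - Phi(f, x)||, by sampling
    phi0 = grad_explanation(x)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d = radius * d / np.linalg.norm(d)
        worst = max(worst, np.linalg.norm(grad_explanation(x + d) - phi0))
    return worst

x = np.array([1.0, -2.0, 0.5])
phi = grad_explanation(x)
print("infidelity:", infidelity(x, phi))
print("max-sensitivity:", max_sensitivity(x))
```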

On Completeness-aware Concept-Based Explanations in Deep Neural Networks

1 code implementation • NeurIPS 2020 • Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar

Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations.
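
As a rough illustration of the completeness idea, the sketch below scores a given set of concept directions by how much of a model's predictions a small head can recover from the concept projections alone, normalized against a random-guess baseline. The synthetic data, the linear stand-in model, and the logistic head are assumptions; the paper's exact completeness definition and concept discovery objective are more general.

```python
# Toy illustration (not the paper's code) of a completeness-style score for
# a given set of concept directions: how much of a model's predictions can
# be recovered from concept scores alone. The data, the linear "model", the
# random-baseline normalization, and the logistic head are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# synthetic features and a fixed "trained model" (a linear classifier)
X = rng.normal(size=(500, 20))
w_model = rng.normal(size=20)
model_pred = (X @ w_model > 0).astype(float)   # also used as the labels here
y = model_pred

def fit_logistic(F, y, lr=0.1, steps=3000):
    # plain gradient descent on the logistic loss; returns weights
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))
        w -= lr * F.T @ (p - y) / len(y)
    return w

def completeness(concepts):
    # concept scores = projections of features onto the concept directions
    scores = X @ concepts                       # shape (n, m)
    w_g = fit_logistic(scores, y)               # small head g on concept scores
    acc_g = np.mean(((scores @ w_g) > 0) == y)
    acc_model = np.mean(model_pred == y)        # 1.0 here by construction
    acc_rand = 0.5                              # random-guess baseline
    return (acc_g - acc_rand) / (acc_model - acc_rand)

# a concept aligned with the model's weight direction vs. random concepts
aligned = w_model[:, None] / np.linalg.norm(w_model)
random_c = rng.normal(size=(20, 3))
print("completeness (aligned concept):", completeness(aligned))
print("completeness (random concepts):", completeness(random_c))
```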

DEEP-TRIM: Revisiting L1 Regularization for Connection Pruning of Deep Network

no code implementations • ICLR 2019 • Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar

State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.

On the (In)fidelity and Sensitivity for Explanations

1 code implementation • 27 Jan 2019 • Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, Pradeep Ravikumar

We analyze optimal explanations with respect to both these measures, and while the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.

Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching

no code implementations • ICLR 2019 • Chih-Kuan Yeh, Jianshu Chen, Chengzhu Yu, Dong Yu

We consider the problem of training speech recognition systems without using any labeled data, under the assumption that the learner can access only the input utterances and a phoneme language model estimated from a non-overlapping corpus.

Language Modelling • Speech Recognition • +1
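
As a rough, heavily simplified illustration of the output-distribution-matching idea behind this setting (not the paper's segmental Empirical-ODM objective), the sketch below nudges an untrained frame-level phoneme classifier so that its batch-averaged output distribution matches a unigram phoneme distribution one might obtain from a separate language model. All shapes, names, and numbers are assumptions.

```python
# Toy illustration (not the paper's objective): a simplified, unigram form
# of output distribution matching. An untrained frame-level phoneme
# classifier is trained so that its batch-averaged output distribution
# matches a target phoneme distribution taken from a separate language
# model. Shapes, the target distribution, and the loss form are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_phonemes, feat_dim = 40, 39          # e.g. 39-dim MFCC-like frames (assumption)
classifier = nn.Linear(feat_dim, n_phonemes)

# target unigram phoneme distribution, e.g. estimated from a text corpus
target = torch.softmax(torch.randn(n_phonemes), dim=0)

opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)
for step in range(200):
    frames = torch.randn(256, feat_dim)                 # unlabeled speech frames
    probs = torch.softmax(classifier(frames), dim=-1)   # per-frame phoneme posteriors
    avg = probs.mean(dim=0)                             # batch-averaged output distribution
    # cross-entropy of the averaged prediction under the LM-derived target
    loss = -(target * torch.log(avg + 1e-8)).sum()
    opt.zero_grad(); loss.backward(); opt.step()

print("final matching loss:", float(loss))
```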

Representer Point Selection for Explaining Deep Neural Networks

1 code implementation • NeurIPS 2018 • Chih-Kuan Yeh, Joon Sik Kim, Ian E. H. Yen, Pradeep Ravikumar

We propose to explain the predictions of a deep neural network by pointing to the set of what we call representer points in the training set for a given test point prediction.
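
A minimal sketch of the representer-style decomposition for the simplest case, an L2-regularized logistic model whose raw inputs stand in for the network's penultimate-layer features: at a stationary point, the test logit equals a sum of per-training-point contributions, which can be ranked to find the most supportive and most opposing examples. The data, regularization strength, and training loop are illustrative, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): representer-style
# decomposition of a test prediction for an L2-regularized logistic model,
# using raw inputs as a stand-in for the network's penultimate features.
import numpy as np

rng = np.random.default_rng(0)

# toy binary classification data
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.3 * rng.normal(size=200) > 0).astype(float)

lam, n = 0.01, X.shape[0]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# fit w by gradient descent on (1/n) * sum logistic_loss + lam * ||w||^2
w = np.zeros(5)
for _ in range(5000):
    grad = X.T @ (sigmoid(X @ w) - y) / n + 2 * lam * w
    w -= 0.5 * grad

# representer coefficients: alpha_i = -1/(2*lam*n) * dL/dz_i at the optimum
alpha = -(sigmoid(X @ w) - y) / (2 * lam * n)

# decompose a test logit into training-point contributions
x_test = rng.normal(size=5)
contributions = alpha * (X @ x_test)              # representer values
print("direct logit:   ", x_test @ w)
print("representer sum:", contributions.sum())    # ~ equal at a stationary point

# most positively / negatively supporting training points
print("top supporters: ", np.argsort(contributions)[-3:][::-1])
print("top opponents:  ", np.argsort(contributions)[:3])
```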

Deep Generative Models for Weakly-Supervised Multi-Label Classification

no code implementations • ECCV 2018 • Hong-Min Chu, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In order to train learning models for multi-label classification (MLC), it is typically desirable to have a large amount of fully annotated multi-label data.

General Classification • Multi-Label Classification

Multi-Label Zero-Shot Learning with Structured Knowledge Graphs

1 code implementation • CVPR 2018 • Chung-Wei Lee, Wei Fang, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which is able to predict multiple unseen class labels for each input instance.

General Classification • Knowledge Graphs • +3

Order-Free RNN with Visual Attention for Multi-Label Classification

1 code implementation • 18 Jul 2017 • Shang-Fu Chen, Yi-Chen Chen, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In this paper, we propose the joint learning of attention and recurrent neural network (RNN) models for multi-label classification.

General Classification • Image Captioning • +1

Learning Deep Latent Spaces for Multi-Label Classification

1 code implementation • 3 Jul 2017 • Chih-Kuan Yeh, Wei-Chieh Wu, Wei-Jen Ko, Yu-Chiang Frank Wang

Multi-label classification is a practical yet challenging task in machine learning and related fields, since it requires the prediction of more than one label category for each input instance.

General Classification • Multi-Label Classification
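
For readers new to the setting, the snippet below shows the generic multi-label prediction setup the abstract above refers to: one sigmoid output per label, a binary cross-entropy loss, and thresholding so that an instance can receive several labels at once. It is not the latent-space model proposed in the paper, and the sizes are arbitrary.

```python
# Generic multi-label classification setup (not the paper's latent-space
# model): one sigmoid output per label, binary cross-entropy loss, and a
# threshold so each instance can receive multiple labels.
import torch
import torch.nn as nn

n_labels, feat_dim = 5, 16
model = nn.Linear(feat_dim, n_labels)

x = torch.randn(4, feat_dim)                       # 4 instances
y = torch.randint(0, 2, (4, n_labels)).float()     # each row can have several 1s

loss = nn.BCEWithLogitsLoss()(model(x), y)         # independent binary losses
loss.backward()

predicted = (torch.sigmoid(model(x)) > 0.5).int()  # multi-label prediction per instance
print(predicted)
```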

Generative-Discriminative Variational Model for Visual Recognition

no code implementations • 7 Jun 2017 • Chih-Kuan Yeh, Yao-Hung Hubert Tsai, Yu-Chiang Frank Wang

In other words, our GDVM casts the supervised learning task as a generative learning process, with data discrimination jointly exploited for improved classification.

General Classification • Multi-class Classification • +2
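
A minimal sketch of one way to couple a VAE-style generative objective with a discriminative classification loss, in the spirit of the sentence above; it is not the paper's GDVM architecture, and the layer sizes and loss weights are illustrative assumptions.

```python
# Minimal sketch (not the paper's architecture): a VAE-style generative
# objective combined with a classifier on the latent code, so that data
# discrimination is jointly exploited alongside generation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeDiscriminativeModel(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
        self.cls = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), self.cls(z), mu, logvar

def joint_loss(x, y, model, beta=1.0, gamma=1.0):
    x_rec, logits, mu, logvar = model(x)
    rec = F.mse_loss(x_rec, x)                                    # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()) # KL to N(0, I)
    ce = F.cross_entropy(logits, y)                               # discrimination
    return rec + beta * kl + gamma * ce

# usage on a random batch
model = GenerativeDiscriminativeModel()
x = torch.rand(16, 784)
y = torch.randint(0, 10, (16,))
loss = joint_loss(x, y, model)
loss.backward()
```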

Automatic Bridge Bidding Using Deep Reinforcement Learning

no code implementations • 12 Jul 2016 • Chih-Kuan Yeh, Hsuan-Tien Lin

Existing artificial intelligence systems for bridge bidding rely on and are thus restricted by human-designed bidding systems or features.

Decision Making
