Search Results for author: Michael Tsang

Found 11 papers, 1 paper with code

DHEN: A Deep and Hierarchical Ensemble Network for Large-Scale Click-Through Rate Prediction

no code implementations11 Mar 2022 Buyun Zhang, Liang Luo, Xi Liu, Jay Li, Zeliang Chen, Weilin Zhang, Xiaohan Wei, Yuchen Hao, Michael Tsang, Wenjun Wang, Yang Liu, Huayu Li, Yasmine Badr, Jongsoo Park, Jiyan Yang, Dheevatsa Mudigere, Ellie Wen

To overcome the training challenge posed by DHEN's deeper, multi-layer structure, we propose a novel co-designed training system that further improves DHEN's training efficiency.

Click-Through Rate Prediction

Interpretable Artificial Intelligence through the Lens of Feature Interaction

no code implementations1 Mar 2021 Michael Tsang, James Enouen, Yan Liu

Interpretation of deep learning models is a very challenging problem because of their large number of parameters, complex connections between nodes, and unintelligible feature representations.

Fairness

Interpretable and Trustworthy Deepfake Detection via Dynamic Prototypes

no code implementations28 Jun 2020 Loc Trinh, Michael Tsang, Sirisha Rambhatla, Yan Liu

In this paper, we propose a novel human-centered approach for detecting forgery in face images, using dynamic prototypes as a form of visual explanation.

DeepFake Detection, Face Swapping

Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection

1 code implementation ICLR 2020 Michael Tsang, Dehua Cheng, Hanpeng Liu, Xue Feng, Eric Zhou, Yan Liu

Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable.

Image Classification, Recommendation Systems

Extracting Interpretable Concept-Based Decision Trees from CNNs

no code implementations11 Jun 2019 Conner Chyung, Michael Tsang, Yan Liu

In an attempt to gather a deeper understanding of how convolutional neural networks (CNNs) reason about human-understandable concepts, we present a method to infer labeled concept data from hidden layer activations and interpret the concepts through a shallow decision tree.
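
To illustrate the shallow-tree idea in the snippet above, here is a minimal sketch that searches for a depth-1 tree (a decision stump) over synthetic hidden-layer activations and recovers the single unit that encodes a binary concept. The data, the stump search, and all names are illustrative stand-ins, not the paper's implementation:

```python
import random

def gini(labels):
    # Gini impurity of a list of 0/1 labels
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_stump(activations, concept_labels):
    # Find the (unit, threshold) split of hidden-layer activations that
    # best separates a binary concept label -- a depth-1 decision tree.
    n = len(activations)
    d = len(activations[0])
    best_unit, best_thr, best_score = None, None, float("inf")
    for unit in range(d):
        for thr in sorted({row[unit] for row in activations}):
            left = [y for row, y in zip(activations, concept_labels) if row[unit] <= thr]
            right = [y for row, y in zip(activations, concept_labels) if row[unit] > thr]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best_score:
                best_unit, best_thr, best_score = unit, thr, score
    return best_unit, best_thr, best_score

# Synthetic stand-in for hidden activations: unit 2 encodes the concept
random.seed(0)
acts = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]
labels = [1 if row[2] > 0.5 else 0 for row in acts]
unit, thr, score = best_stump(acts, labels)
```

On this toy data the stump recovers unit 2 with a near-0.5 threshold; the paper's method additionally infers the concept labels themselves from activations, which this sketch takes as given.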

Can I trust you more? Model-Agnostic Hierarchical Explanations

no code implementations ICLR 2019 Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu

Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models.

Neural Interaction Transparency (NIT): Disentangling Learned Interactions for Improved Interpretability

no code implementations NeurIPS 2018 Michael Tsang, Hanpeng Liu, Sanjay Purushotham, Pavankumar Murali, Yan Liu

Neural networks are known to model statistical interactions, but they entangle the interactions at intermediate hidden layers for shared representation learning.

Additive Models, Representation Learning

Detecting Statistical Interactions from Neural Network Weights

no code implementations ICLR 2018 Michael Tsang, Dehua Cheng, Yan Liu

Interpreting neural networks is a crucial and challenging task in machine learning.
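
The idea named in the title, reading statistical interactions off a network's learned weights, can be sketched for a one-hidden-layer network: a hidden unit that assigns large weight to several inputs is evidence those inputs interact, gated by how strongly that unit influences the output. This is a hedged toy illustration with made-up weights, not the paper's full method:

```python
import itertools

def pairwise_interaction_strengths(W1, w_out):
    # Rank feature pairs by interaction strength read off the weights of a
    # one-hidden-layer network: hidden unit h contributes to pair (i, j) in
    # proportion to min(|W1[h][i]|, |W1[h][j]|) * |w_out[h]|.
    n_hidden = len(W1)
    n_features = len(W1[0])
    strengths = {}
    for i, j in itertools.combinations(range(n_features), 2):
        s = sum(min(abs(W1[h][i]), abs(W1[h][j])) * abs(w_out[h])
                for h in range(n_hidden))
        strengths[(i, j)] = s
    return sorted(strengths.items(), key=lambda kv: -kv[1])

# Toy weights: hidden unit 0 connects strongly to features 0 and 1 only,
# so the pair (0, 1) should rank first
W1 = [[2.0, 2.0, 0.0],
      [0.0, 0.0, 1.0]]
w_out = [1.0, 1.0]
ranking = pairwise_interaction_strengths(W1, w_out)
```

Taking the minimum of the incoming weight magnitudes captures that an interaction needs all of its features to reach the unit with non-negligible weight; a single large weight alone contributes nothing to any pair.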
