Search Results for author: Mengnan Du

Found 26 papers, 3 papers with code

What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression

no code implementations16 Oct 2021 Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, Ahmed Hassan Awadallah

Recent work has focused on compressing pre-trained language models (PLMs) such as BERT, where the major focus has been improving the compressed model's performance on downstream tasks.

Knowledge Distillation Language understanding +2

Fairness via Representation Neutralization

no code implementations NeurIPS 2021 Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Hassan Awadallah, Xia Hu

This process not only requires many instance-level annotations for sensitive attributes, but it also does not guarantee that all fairness-sensitive information has been removed from the encoder.

Classification Fairness

A General Taylor Framework for Unifying and Revisiting Attribution Methods

no code implementations28 May 2021 Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Xia Hu

However, the attribution problem itself has not been well defined, and it lacks a unified guideline for the contribution-assignment process.

Decision Making

Learning Disentangled Representations for Time Series

no code implementations17 May 2021 Yuening Li, Zhengzhang Chen, Daochen Zha, Mengnan Du, Denghui Zhang, Haifeng Chen, Xia Hu

Motivated by the success of disentangled representation learning in computer vision, we study the possibility of learning semantically rich time-series representations, which remains unexplored due to three main challenges: 1) the sequential data structure introduces complex temporal correlations and makes the latent representations hard to interpret, 2) sequential models suffer from the KL vanishing problem, and 3) interpretable semantic concepts for time series often rely on multiple factors instead of individual ones.

Representation Learning Time Series +1

Mutual Information Preserving Back-propagation: Learn to Invert for Faithful Attribution

no code implementations14 Apr 2021 Huiqi Deng, Na Zou, Weifu Chen, Guocan Feng, Mengnan Du, Xia Hu

The basic idea is to learn a source signal by back-propagation such that the mutual information between the input and the output is preserved, as much as possible, in the mutual information between the input and the source signal.

Decision Making

Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation

no code implementations18 Jan 2021 Fan Yang, Ninghao Liu, Mengnan Du, Xia Hu

With the wide use of deep neural networks (DNNs), model interpretability has become a critical concern, since explainable decisions are preferred in high-stakes scenarios.

Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection

no code implementations17 Nov 2020 Ruixiang Tang, Mengnan Du, Xia Hu

In this paper, we introduce DSN (Deep Serial Number), a new watermarking approach that can prevent the stolen model from being deployed by unauthorized parties.

Knowledge Distillation

A Unified Taylor Framework for Revisiting Attribution Methods

no code implementations21 Aug 2020 Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Xia Hu

Attribution methods have been developed to understand the decision-making process of machine learning models, especially deep neural networks, by assigning importance scores to individual features.

Decision Making

Mitigating Gender Bias in Captioning Systems

1 code implementation15 Jun 2020 Ruixiang Tang, Mengnan Du, Yuening Li, Zirui Liu, Na Zou, Xia Hu

Image captioning has made substantial progress with huge supporting image collections sourced from the web.

Gender Prediction Image Captioning

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks

1 code implementation15 Jun 2020 Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu

In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers.

Adversarial Attacks and Defenses: An Interpretation Perspective

no code implementations23 Apr 2020 Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu

In this paper, we review recent work on adversarial attacks and defenses, particularly from the perspective of machine learning interpretation.

Adversarial Attack Adversarial Defense +1

Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks

7 code implementations3 Oct 2019 Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu

Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks, and the reason why the network makes specific decisions.

Adversarial Attack Decision Making +1
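As a rough illustration of Score-CAM's score-weighting idea (a minimal sketch, not the authors' implementation): each upsampled activation map is normalized, used to mask the input, and the resulting change in the model's class score becomes that map's weight. Here a toy scalar function stands in for the CNN classifier, and the activation maps are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
template = rng.normal(size=(H, W))

def model(img):
    # Stand-in for a classifier's target-class score; any deterministic
    # scalar function of the image works for this sketch.
    return float((img * template).sum())

image = rng.normal(size=(H, W))
activations = rng.random(size=(3, H, W))  # pretend: 3 upsampled CNN feature maps

baseline = model(np.zeros((H, W)))
weights = []
for a in activations:
    a_norm = (a - a.min()) / (a.max() - a.min() + 1e-8)  # scale map to [0, 1]
    weights.append(model(image * a_norm) - baseline)      # score of masked input

weights = np.exp(weights) / np.exp(weights).sum()         # softmax over map scores
# Weighted sum of maps, followed by ReLU, gives the saliency map.
saliency = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
```

Unlike gradient-based CAM variants, the weights here come from forward passes only, which is the property the paper's title refers to.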

Sub-Architecture Ensemble Pruning in Neural Architecture Search

no code implementations1 Oct 2019 Yijun Bian, Qingquan Song, Mengnan Du, Jun Yao, Huanhuan Chen, Xia Hu

Neural architecture search (NAS) has been gaining increasing attention in recent years due to its flexibility and its remarkable ability to reduce the burden of neural network design.

Ensemble Learning Ensemble Pruning +1

Distribution-Guided Local Explanation for Black-Box Classifiers

no code implementations25 Sep 2019 Weijie Fu, Meng Wang, Mengnan Du, Ninghao Liu, Shijie Hao, Xia Hu

Existing local explanation methods provide an explanation for each decision of black-box classifiers, in the form of relevance scores of features according to their contributions.

Towards Generalizable Deepfake Detection with Locality-aware AutoEncoder

no code implementations13 Sep 2019 Mengnan Du, Shiva Pentyala, Yuening Li, Xia Hu

The analysis further shows that LAE outperforms the state-of-the-art by 6.52%, 12.03%, and 3.08%, respectively, on three deepfake detection tasks in terms of generalization accuracy on previously unseen manipulations.

Active Learning DeepFake Detection +2

Fairness in Deep Learning: A Computational Perspective

no code implementations23 Aug 2019 Mengnan Du, Fan Yang, Na Zou, Xia Hu

Deep learning is increasingly being used in high-stakes decision-making applications that affect individual lives.

Decision Making Fairness

Learning Credible Deep Neural Networks with Rationale Regularization

no code implementations13 Aug 2019 Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu

Recent explainability studies have shown that state-of-the-art DNNs do not always adopt correct evidence to make decisions.

Text Classification

Deep Structured Cross-Modal Anomaly Detection

no code implementations11 Aug 2019 Yuening Li, Ninghao Liu, Jundong Li, Mengnan Du, Xia Hu

To this end, we propose a novel deep structured anomaly detection framework to identify the cross-modal anomalies embedded in the data.

Anomaly Detection

SpecAE: Spectral AutoEncoder for Anomaly Detection in Attributed Networks

no code implementations11 Aug 2019 Yuening Li, Xiao Huang, Jundong Li, Mengnan Du, Na Zou

SpecAE leverages Laplacian sharpening to amplify the distances between representations of anomalies and the ones of the majority.

Anomaly Detection Density Estimation
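The Laplacian-sharpening step mentioned above can be sketched in a few lines. This is a generic illustration of the operation (not SpecAE itself, and the graph and γ value are made up): smoothing pulls each node's representation toward its neighborhood mean, while sharpening pushes it away, amplifying the deviation of anomalous nodes.

```python
import numpy as np

# Toy attributed graph: 4 nodes with adjacency A and 1-d node features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [1.1], [0.9], [5.0]])  # node 3 deviates from its neighborhood

deg = A.sum(axis=1)
A_norm = A / deg[:, None]   # row-normalized adjacency: A_norm @ X = neighbor means

gamma = 1.0
# Laplacian smoothing:  X + gamma * (A_norm @ X - X)  -> toward neighbors
# Laplacian sharpening: X - gamma * (A_norm @ X - X)  -> away from neighbors
X_sharp = (1 + gamma) * X - gamma * (A_norm @ X)
```

After sharpening, node 3's feature (5.0) moves further from its single neighbor's value (0.9), making the anomaly easier to separate, which matches the distance-amplification effect described in the snippet.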

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning

no code implementations16 Jul 2019 Fan Yang, Mengnan Du, Xia Hu

Interpretable Machine Learning (IML) has become increasingly important in many real-world applications, such as autonomous cars and medical diagnosis, where explanations are strongly preferred to help people better understand how machine learning systems work and to enhance their trust in those systems.

Interpretable Machine Learning Medical Diagnosis

XFake: Explainable Fake News Detector with Visualizations

no code implementations8 Jul 2019 Fan Yang, Shiva K. Pentyala, Sina Mohseni, Mengnan Du, Hao Yuan, Rhema Linder, Eric D. Ragan, Shuiwang Ji, Xia Hu

In this demo paper, we present the XFake system, an explainable fake news detector that assists end-users in assessing news credibility.

On Attribution of Recurrent Neural Network Predictions via Additive Decomposition

no code implementations27 Mar 2019 Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Xia Hu

REAT decomposes the final prediction of an RNN into the additive contributions of each word in the input text.

Decision Making
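The additive-decomposition idea can be illustrated with a telescoping sum (a simplified sketch, not the paper's exact REAT formulation): with a linear read-out w on the final hidden state and h_0 = 0, the logit w·h_T equals the sum over timesteps of w·(h_t − h_{t−1}), so each term can be read as word t's contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 8, 16
x = rng.normal(size=(T, d_in))            # toy "sentence": 5 word embeddings
W_xh = rng.normal(size=(d_in, d_h)) * 0.1
W_hh = rng.normal(size=(d_h, d_h)) * 0.1
w_out = rng.normal(size=d_h)              # linear read-out to a single logit

# Vanilla RNN forward pass, keeping every hidden state.
h = np.zeros(d_h)
states = [h]
for t in range(T):
    h = np.tanh(x[t] @ W_xh + h @ W_hh)
    states.append(h)

logit = states[-1] @ w_out

# Telescoping decomposition: logit = sum_t  w_out . (h_t - h_{t-1}).
contributions = np.array([(states[t + 1] - states[t]) @ w_out for t in range(T)])
assert np.isclose(contributions.sum(), logit)
```

The contributions sum exactly to the prediction, which is the defining property of an additive decomposition; REAT's actual scheme additionally accounts for the gating structure of LSTMs/GRUs.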

Techniques for Interpretable Machine Learning

no code implementations31 Jul 2018 Mengnan Du, Ninghao Liu, Xia Hu

Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these models arrive at a particular decision.

Interpretable Machine Learning

Towards Explanation of DNN-based Prediction with Guided Feature Inversion

no code implementations19 Mar 2018 Mengnan Du, Ninghao Liu, Qingquan Song, Xia Hu

While deep neural networks (DNNs) have become an effective computational tool, their prediction results are often criticized for a lack of interpretability, which is essential in many real-world applications such as health informatics.

Decision Making
