Search Results for author: Hanlin Zhang

Found 10 papers, 4 papers with code

Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Empirical and theoretical analysis demonstrates that the MDL loss simultaneously improves the robustness and generalization of the model under natural training.

FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Under strong adversarial attacks against a deep neural network (DNN), the output of the DNN is misclassified if and only if the last feature layer is completely destroyed by adversarial samples, while our studies found that the middle feature layers of the DNN can still extract effective features of the original normal category under these attacks.

Enhancing the Transferability of Adversarial Examples via a Few Queries

no code implementations • 19 May 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Due to the vulnerability of deep neural networks, black-box attacks have drawn great attention from the community.

Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation

1 code implementation • 2 Feb 2022 • Yi-Fan Zhang, Hanlin Zhang, Zachary C. Lipton, Li Erran Li, Eric P. Xing

Neural networks (NNs) are often leveraged to represent structural similarities of potential outcomes (POs) of different treatment groups to obtain better finite-sample estimates of treatment effects.

POS • Selection bias

Stochastic Neural Networks with Infinite Width are Deterministic

no code implementations • 30 Jan 2022 • Liu Ziyin, Hanlin Zhang, Xiangming Meng, Yuting Lu, Eric Xing, Masahito Ueda

This work theoretically studies stochastic neural networks, one of the main types of neural networks in use.

Towards Principled Disentanglement for Domain Generalization

1 code implementation • CVPR 2022 • Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing

To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).

Disentanglement • Domain Generalization

Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features

1 code implementation • 5 Nov 2021 • Haohan Wang, Zeyi Huang, Hanlin Zhang, Yong Jae Lee, Eric Xing

Machine learning has demonstrated remarkable prediction accuracy on i.i.d. data, but the accuracy often drops when tested on data from another distribution.

BIG-bench Machine Learning

Towards Interpretable Natural Language Understanding with Explanations as Latent Variables

1 code implementation • NeurIPS 2020 • Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, Jian Tang

In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.

Explanation Generation • Natural Language Understanding

Iterative Graph Self-Distillation

no code implementations • 23 Oct 2020 • Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing

How to discriminatively vectorize graphs is a fundamental challenge that has attracted increasing attention in recent years.

Contrastive Learning • Graph Learning • +1

Enabling Efficient Verifiable Fuzzy Keyword Search Over Encrypted Data in Cloud Computing

no code implementations • journal 2018 • Xinrui Ge, Jia Yu, Chengyu Hu, Hanlin Zhang, Rong Hao

In searchable encryption, the cloud server might return an invalid result to the data user to save computation cost or for other reasons.
