Search Results for author: Hanlin Zhang

Found 20 papers, 9 papers with code

Enabling Efficient Verifiable Fuzzy Keyword Search Over Encrypted Data in Cloud Computing

no code implementations • Journal 2018 • Xinrui Ge, Jia Yu, Chengyu Hu, Hanlin Zhang, Rong Hao

In searchable encryption, the cloud server might return an invalid result to the data user to save computation cost, or for other reasons.

Cloud Computing

Iterative Graph Self-Distillation

no code implementations • 23 Oct 2020 • Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing

Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs.

Contrastive Learning Graph Learning +1

Towards Interpretable Natural Language Understanding with Explanations as Latent Variables

1 code implementation • NeurIPS 2020 • Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, Jian Tang

In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.

Explanation Generation Natural Language Understanding

Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features

1 code implementation • 5 Nov 2021 • Haohan Wang, Zeyi Huang, Hanlin Zhang, Yong Jae Lee, Eric Xing

Machine learning has demonstrated remarkable prediction accuracy over i.i.d. data, but the accuracy often drops when tested with data from another distribution.

BIG-bench Machine Learning

Towards Principled Disentanglement for Domain Generalization

1 code implementation • CVPR 2022 • Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing

To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).

Disentanglement Domain Generalization
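
As a schematic of the constrained-optimization view above (the notation here is generic, not the paper's exact formulation): minimize the task risk of a predictor acting on semantic features, subject to a disentanglement constraint that keeps those features invariant to variation factors.

```latex
\min_{f,\,s}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[ \ell\big(f(s(x)),\, y\big) \right]
\quad \text{s.t.} \quad s(x) = s(\tilde{x})
\;\;\text{for every variation-perturbed view } \tilde{x} \text{ of } x
```

Here s extracts semantic factors, f predicts labels from them, and x̃ shares semantics with x while differing in variation (e.g., style or domain) factors.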

Stochastic Neural Networks with Infinite Width are Deterministic

no code implementations • 30 Jan 2022 • Liu Ziyin, Hanlin Zhang, Xiangming Meng, Yuting Lu, Eric Xing, Masahito Ueda

This work theoretically studies stochastic neural networks, a main type of neural network in use.

Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation

1 code implementation • 2 Feb 2022 • Yi-Fan Zhang, Hanlin Zhang, Zachary C. Lipton, Li Erran Li, Eric P. Xing

Previous works on Treatment Effect Estimation (TEE) are not in widespread use because they are predominantly theoretical, making strong parametric assumptions that are intractable in practical applications.

POS Selection bias

Gradient Aligned Attacks via a Few Queries

no code implementations • 19 May 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Specifically, we propose a gradient aligned mechanism to ensure that the derivatives of the loss function with respect to the logit vector have the same weight coefficients between the surrogate and victim models.
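
For cross-entropy, the gradient with respect to the logits is softmax(z) − onehot(y), so its per-class weight coefficients depend on each model's own confidence. One way to make these coefficients model-independent, shown below as a hedged sketch of the general idea rather than the paper's exact mechanism, is a margin loss whose logit gradient is constant:

```python
import torch

def margin_loss(logits, y):
    # Gradient w.r.t. the logits is -1 on the true class and +1 on the
    # strongest competing class, independent of the model's confidence.
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, y.unsqueeze(1), float("-inf"))
    runner_up = others.max(dim=1).values
    return (runner_up - true).mean()

def fgsm_step(model, x, y, eps):
    # One signed-gradient attack step using the constant-coefficient loss.
    x = x.clone().requires_grad_(True)
    margin_loss(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```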

FACM: Intermediate Layer Still Retain Effective Features against Adversarial Examples

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

To enhance the robustness of the classifier, our paper proposes a Feature Analysis and Conditional Matching prediction distribution (FACM) model that utilizes the features of intermediate layers to correct the classification.
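
As a generic illustration of using intermediate-layer features for classification (a hypothetical sketch of auxiliary heads, not the paper's actual FACM construction):

```python
import torch
import torch.nn as nn

class IntermediateHeads(nn.Module):
    """Auxiliary classifiers on intermediate features vote alongside the
    final layer; feat_dims must match each block's flattened output."""
    def __init__(self, blocks, feat_dims, num_classes):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.heads = nn.ModuleList([nn.Linear(d, num_classes) for d in feat_dims])

    def forward(self, x):
        votes = []
        h = x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            votes.append(head(h.flatten(1)))
        # Average per-layer logits so intermediate features can correct
        # a final-layer prediction that adversarial noise has flipped.
        return torch.stack(votes).mean(dim=0)
```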

Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Empirical and theoretical analysis demonstrates that the MDL loss simultaneously improves the robustness and generalization of the model under natural training.

A Closer Look at the Calibration of Differentially Private Learners

no code implementations • 15 Oct 2022 • Hanlin Zhang, Xuechen Li, Prithviraj Sen, Salim Roukos, Tatsunori Hashimoto

Across 7 tasks, temperature scaling and Platt scaling with DP-SGD result in an average 3.1-fold reduction in the in-domain expected calibration error, while incurring at most a minor drop in accuracy.
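
Temperature scaling, mentioned above, post-hoc recalibrates a frozen classifier by dividing its logits by one scalar fitted on held-out data; a minimal sketch (the data and variable names are illustrative, and this is the standard recipe rather than the paper's code):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    # Negative log-likelihood of temperature-scaled softmax predictions.
    z = logits / T
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(val_logits, val_labels):
    # Fit the single scalar T > 0 on held-out logits; the model is frozen.
    res = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded",
                          args=(val_logits, val_labels))
    return res.x

# Illustrative usage with random stand-ins for validation logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(128, 10))
labels = rng.integers(0, 10, size=128)
T = fit_temperature(logits, labels)
probs = np.exp(logits / T) / np.exp(logits / T).sum(axis=1, keepdims=True)
```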

The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning

1 code implementation • 16 Dec 2022 • Hanlin Zhang, Yi-Fan Zhang, Li Erran Li, Eric Xing

Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations (or "chain-of-thought", CoT) for in-context learning.

In-Context Learning
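
For context, chain-of-thought in-context learning simply prepends worked examples, reasoning included, to the test question; a toy prompt (illustrative only, not from the paper):

```python
prompt = """Q: Roger has 5 balls. He buys 2 cans with 3 balls each. How many balls does he have now?
A: He bought 2 * 3 = 6 balls. 5 + 6 = 11. The answer is 11.

Q: A farm has 3 pens with 4 hens each. How many hens are there in total?
A:"""  # the LM is expected to continue with a similar reasoning chain
```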

Fuzziness-tuned: Improving the Transferability of Adversarial Examples

no code implementations • 17 Mar 2023 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

In this paper, we first systematically investigate this issue and find that the enormous gap in attack success rates between the surrogate and victim models is caused by a special region (termed the fuzzy domain in our paper), in which adversarial examples are misclassified by the surrogate model but classified correctly by the victim model.

Improving the Transferability of Adversarial Examples via Direction Tuning

2 code implementations • 27 Mar 2023 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Although considerable effort has been devoted to improving the transferability of adversarial examples generated by transfer-based adversarial attacks, our investigation found that the large deviation between the actual and steepest update directions of current transfer-based attacks is caused by the large update step length, which prevents the generated adversarial examples from converging well.

Network Pruning
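
One common way to act on that diagnosis, sketched here as the general idea with hypothetical names rather than the paper's exact algorithm, is to probe several small look-ahead steps inside each large iteration and average their gradients before taking the signed update:

```python
import torch

def direction_tuned_step(model, loss_fn, x, y, alpha, k=5):
    # Average the gradients of k small inner steps so the outer signed
    # update of length alpha tracks the steepest direction more closely.
    x_probe, grad_sum = x.clone(), torch.zeros_like(x)
    for _ in range(k):
        x_probe = x_probe.detach().requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_probe), y), x_probe)[0]
        grad_sum += grad
        x_probe = x_probe + (alpha / k) * grad.sign()  # small inner step
    return (x + alpha * grad_sum.sign()).clamp(0, 1).detach()
```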

Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming

1 code implementation • 5 May 2023 • Hanlin Zhang, Jiani Huang, Ziyang Li, Mayur Naik, Eric Xing

We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning.

Logical Reasoning
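
Read as a pipeline, the division of labor is: the LM scores candidate relational facts from text, and a differentiable logic engine deduces the answer from them so gradients can flow back into the LM. A minimal sketch with hypothetical interfaces (lm_fact_scorer, reasoner, and rules are stand-ins, not the paper's API):

```python
def dsr_lm_forward(passage, query, lm_fact_scorer, reasoner, rules):
    # Perception: the LM assigns a probability to each candidate fact,
    # e.g. [("parent", "ann", "bob", 0.92), ...].
    facts = lm_fact_scorer(passage)
    # Deduction: a differentiable symbolic engine applies rules such as
    # grandparent(x, z) :- parent(x, y), parent(y, z) to the weighted
    # facts and returns a differentiable score for the query.
    return reasoner(rules, facts).prob(query)
```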

DeepHEN: quantitative prediction essential lncRNA genes and rethinking essentialities of lncRNA genes

no code implementations • 18 Sep 2023 • Hanlin Zhang, Wenzheng Cheng

Compared to other methods for predicting the essentiality of lncRNA genes, our DeepHEN model not only indicates whether sequence features or network spatial features have a greater influence on essentiality, but also addresses the overfitting that those methods suffer from the small number of essential lncRNA genes, as evidenced by the results of enrichment analysis.

Representation Learning

Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models

1 code implementation • 7 Nov 2023 • Hanlin Zhang, Benjamin L. Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, Boaz Barak

To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used.
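
At a high level, such an attack can be assembled from two primitives: a perturbation oracle that makes small quality-preserving edits and a quality oracle that vets them; a random walk of accepted edits then drifts the output out of the watermarked region. A minimal sketch (the oracle interfaces are hypothetical):

```python
def remove_watermark(text, perturb, quality_ok, steps=200):
    # Random walk: repeatedly apply small perturbations, keeping only
    # those that preserve quality. Neither the watermarking scheme nor
    # its secret key needs to be known.
    current = text
    for _ in range(steps):
        candidate = perturb(current)   # e.g., paraphrase one sentence
        if quality_ok(candidate):      # reject quality-degrading edits
            current = candidate
    return current
```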

A Study on the Calibration of In-context Learning

no code implementations • 7 Dec 2023 • Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric Xing, Himabindu Lakkaraju, Sham Kakade

Accurate uncertainty quantification is crucial for the safe deployment of language models (LMs), and prior research has demonstrated improvements in the calibration of modern LMs.

In-Context Learning Natural Language Understanding +1

Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems

no code implementations • 27 Feb 2024 • Zhenting Qi, Hanlin Zhang, Eric Xing, Sham Kakade, Himabindu Lakkaraju

Retrieval-Augmented Generation (RAG) improves pre-trained models by incorporating external knowledge at test time to enable customized adaptation.

Instruction Following Retrieval
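
At its simplest, the RAG pattern referenced above retrieves documents at query time and stuffs them into the prompt; a minimal sketch (retriever and llm are hypothetical callables):

```python
def rag_answer(query, retriever, llm, k=4):
    # Fetch the k most relevant documents, then condition the LM on them.
    docs = retriever(query, k)
    context = "\n\n".join(docs)
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

The extraction risk studied in the paper arises because this retrieved context can itself be coaxed back out of the model by adversarial instructions.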
