Search Results for author: Zhengyan Zhang

Found 41 papers, 29 papers with code

BMInf: An Efficient Toolkit for Big Model Inference and Tuning

1 code implementation ACL 2022 Xu Han, Guoyang Zeng, Weilin Zhao, Zhiyuan Liu, Zhengyan Zhang, Jie Zhou, Jun Zhang, Jia Chao, Maosong Sun

In recent years, large-scale pre-trained language models (PLMs) containing billions of parameters have achieved promising results on various NLP tasks.

Quantization Scheduling

Robust and Scalable Model Editing for Large Language Models

1 code implementation 26 Mar 2024 Yingfa Chen, Zhengyan Zhang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Chen Chen, Kuai Li, Tao Yang, Maosong Sun

Large language models (LLMs) can make predictions using parametric knowledge (knowledge encoded in the model weights) or contextual knowledge (knowledge presented in the context).

Model Editing

ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models

1 code implementation 21 Feb 2024 Chenyang Song, Xu Han, Zhengyan Zhang, Shengding Hu, Xiyu Shi, Kuai Li, Chen Chen, Zhiyuan Liu, Guangli Li, Tao Yang, Maosong Sun

Some recent efforts have explored introducing ReLU or its variants as substitute activation functions to help LLMs achieve activation sparsity and inference acceleration, but few can simultaneously obtain high sparsity and comparable model performance.

InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory

no code implementations 7 Feb 2024 Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, Maosong Sun

To alleviate these issues, existing efforts employ sliding attention windows and discard distant tokens to achieve the processing of extremely long sequences.
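As a rough illustration of the sliding-window baseline the snippet refers to (not InfLLM's training-free memory itself), the hypothetical helper below builds a causal attention mask in which each token only attends to the most recent `window` positions and distant tokens are discarded.

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask; entry [i, j] is True if query i may attend to key j."""
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]          # no attention to future tokens
    local = idx[:, None] - idx[None, :] < window   # only the most recent `window` tokens
    return causal & local

print(sliding_window_mask(seq_len=6, window=3).int())
```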

ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs

no code implementations 6 Feb 2024 Zhengyan Zhang, Yixin Song, Guanghui Yu, Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, Maosong Sun

To find the most efficient activation function for sparse computation, we propose a systematic framework to examine the sparsity of LLMs from three aspects: the trade-off between sparsity and performance, the predictivity of sparsity, and the hardware affinity.
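For reference, the ReLU$^2$ of the title denotes the squared ReLU, max(0, x)^2. The sketch below, with hypothetical helper names, shows the activation together with one crude sparsity measure (the fraction of exactly-zero outputs); the paper's three-aspect framework is considerably broader than this.

```python
import torch

def relu2(x: torch.Tensor) -> torch.Tensor:
    """Squared ReLU: max(0, x) ** 2."""
    return torch.relu(x) ** 2

def zero_fraction(x: torch.Tensor) -> float:
    """Share of activations that are exactly zero, a simple sparsity measure."""
    return (relu2(x) == 0).float().mean().item()

hidden = torch.randn(4, 1024)  # stand-in for an MLP pre-activation
print(f"activation sparsity: {zero_fraction(hidden):.2f}")  # about 0.5 for N(0, 1) inputs
```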

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

1 code implementation 24 Oct 2023 Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs.

Computational Efficiency

CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models

no code implementations 15 Jul 2023 Weilin Zhao, Yuxiang Huang, Xu Han, Zhiyuan Liu, Zhengyan Zhang, Maosong Sun

Parameter-efficient tuning (PET) has been widely explored in recent years because it tunes far fewer parameters (PET modules) than full-parameter fine-tuning (FT) while still stimulating sufficient knowledge from large language models (LLMs) for downstream tasks.

Plug-and-Play Document Modules for Pre-trained Models

1 code implementation 28 May 2023 Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Zhao Cao, Maosong Sun

By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document once to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries using task-specific encoders.

Question Answering

Plug-and-Play Knowledge Injection for Pre-trained Language Models

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.

Emergent Modularity in Pre-trained Transformers

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou

In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.

UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models

no code implementations 2 May 2023 Deming Ye, Yankai Lin, Zhengyan Zhang, Maosong Sun

In this paper, we propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.

Entity Typing · Named Entity Recognition +2

Bridging the Language Gap: Knowledge Injected Multilingual Question Answering

no code implementations 6 Apr 2023 Zhichao Duan, Xiuxing Li, Zhengyan Zhang, Zhenyu Li, Ning Liu, Jianyong Wang

As a popular topic in natural language processing, the extractive question answering task (extractive QA) has gained extensive attention in the past few years.

Cross-Lingual Transfer · Extractive Question-Answering +3

READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

1 code implementation 14 Feb 2023 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun

In order to fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises.

Data Augmentation · Fairness +2

Finding Skill Neurons in Pre-trained Transformer-based Language Models

1 code implementation 14 Nov 2022 Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li

Furthermore, we demonstrate that skill neurons are most likely generated in pre-training rather than fine-tuning, by showing that the skill neurons found with prompt tuning are also crucial for other fine-tuning methods that freeze neuron weights, such as adapter-based tuning and BitFit.

Network Pruning

Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models

1 code implementation COLING 2022 Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou

Prompting, which casts downstream applications as language modeling tasks, has been shown to be sample-efficient compared to standard fine-tuning with pre-trained models.

Few-Shot Learning · Language Modelling +1

Effective Few-Shot Named Entity Linking by Meta-Learning

1 code implementation 12 Jul 2022 Xiuxing Li, Zhenyu Li, Zhengyan Zhang, Ning Liu, Haitao Yuan, Wei Zhang, Zhiyuan Liu, Jianyong Wang

In this paper, we endeavor to solve the problem of few-shot entity linking, which only requires a minimal amount of in-domain labeled data and is more practical in real situations.

Entity Linking · Knowledge Base Completion +2

A Unified Understanding of Deep NLP Models for Text Classification

no code implementations 19 Jun 2022 Zhen Li, Xiting Wang, Weikai Yang, Jing Wu, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun, Hui Zhang, Shixia Liu

The rapid development of deep natural language processing (NLP) models for text classification has led to an urgent need for a unified understanding of these individually proposed models.

Text Classification

Prompt Tuning for Discriminative Pre-trained Language Models

1 code implementation Findings (ACL) 2022 Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang

Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.

Language Modelling · Question Answering +2

CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models

1 code implementation 24 Sep 2021 Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun

Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural language in image data, facilitating a broad variety of cross-modal tasks.

Visual Grounding

CPM-2: Large-scale Cost-effective Pre-trained Language Models

2 code implementations 20 Jun 2021 Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, Zhenbo Sun, Yuan Yao, Fanchao Qi, Jian Guan, Pei Ke, Yanzheng Cai, Guoyang Zeng, Zhixing Tan, Zhiyuan Liu, Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun

We present a suite of cost-effective techniques for the use of PLMs to deal with the efficiency issues of pre-training, fine-tuning, and inference.

Sub-Character Tokenization for Chinese Pretrained Language Models

2 code implementations 1 Jun 2021 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

Pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, hence being robust to homophone typos.

Chinese Word Segmentation · Computational Efficiency +2
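The homophone property described in the snippet can be illustrated with the pypinyin package: two words that sound identical map to the same pinyin sequence, so a purely pronunciation-based tokenizer would emit identical output for both. This is only an illustration of the idea; the paper's SubChar tokenizers are more involved.

```python
# Requires: pip install pypinyin
from pypinyin import lazy_pinyin

print(lazy_pinyin("意义"))  # ['yi', 'yi']
print(lazy_pinyin("异议"))  # ['yi', 'yi'], identical transliteration for a homophone pair
```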

Knowledge Inheritance for Pre-trained Language Models

2 code implementations NAACL 2022 Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation could serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.

Domain Adaptation · Knowledge Distillation +2
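For readers unfamiliar with distillation as auxiliary supervision, the following is a minimal sketch of a standard temperature-scaled distillation loss, assuming teacher and student logits over the same vocabulary; it is a generic formulation, not the KI framework from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```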

Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger

2 code implementations ACL 2021 Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun

As far as we know, almost all existing textual backdoor attack methods insert additional content into normal samples as triggers, which causes the trigger-embedded samples to be detected and the backdoor attacks to be blocked without much effort.

Backdoor Attack

CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of Pre-trained Language Models

1 code implementation 7 Feb 2021 Yusheng Su, Xu Han, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Peng Li, Jie Zhou, Maosong Sun

We then perform contrastive semi-supervised learning on both the retrieved unlabeled and original labeled instances to help PLMs capture crucial task-related semantic features.

Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks

1 code implementation ICML Workshop AML 2021 Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, Maosong Sun

In this work, we demonstrate the universal vulnerability of PTMs, where fine-tuned PTMs can be easily controlled by backdoor attacks in arbitrary downstream tasks.

Backdoor Attack

Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

1 code implementation 31 Dec 2020 Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

In this work, we propose a simple and effective method to cover a much larger proportion of the attack search space, called Adversarial and Mixup Data Augmentation (AMDA).

Adversarial Robustness · Text Augmentation +2
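As a hedged sketch of the mixup half of AMDA, the hypothetical helper below interpolates a pair of examples (for instance, sentence embeddings) and their soft labels with a Beta-sampled weight; the adversarial-example half of the method is not shown.

```python
import torch

def mixup(x_a: torch.Tensor, x_b: torch.Tensor,
          y_a: torch.Tensor, y_b: torch.Tensor, alpha: float = 0.4):
    """Interpolate a pair of examples and their (soft) labels with a Beta-sampled weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b
```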

Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads

no code implementations 7 Nov 2020 Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Qun Liu, Maosong Sun

To measure the informativeness of attention heads, we train our Single-Shot Meta-Pruner (SMP) with a meta-learning paradigm aiming to maintain the distribution of text representations after pruning.

Informativeness · Meta-Learning +1

CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models

1 code implementation 29 Sep 2020 Yusheng Su, Xu Han, Zhengyan Zhang, Peng Li, Zhiyuan Liu, Yankai Lin, Jie Zhou, Maosong Sun

In this paper, we propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant and ambiguous knowledge in KGs that cannot match the input text.

Knowledge Graphs

Train No Evil: Selective Masking for Task-Guided Pre-Training

1 code implementation EMNLP 2020 Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun

In this stage, the model is trained by masked language modeling on in-domain unsupervised data to learn domain-specific patterns, and we propose a novel selective masking strategy to learn task-specific patterns.

Language Modelling · Masked Language Modeling +1

Adversarial Language Games for Advanced Natural Language Intelligence

1 code implementation 5 Nov 2019 Yuan Yao, Haoxi Zhong, Zhengyan Zhang, Xu Han, Xiaozhi Wang, Chaojun Xiao, Guoyang Zeng, Zhiyuan Liu, Maosong Sun

In this work, we propose a challenging adversarial language game called Adversarial Taboo as an example, in which an attacker and a defender compete around a target word.

Board Games

ERNIE: Enhanced Language Representation with Informative Entities

2 code implementations ACL 2019 Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu

Neural language representation models such as BERT, pre-trained on large-scale corpora, can capture rich semantic patterns from plain text and can be fine-tuned to consistently improve performance on various NLP tasks.

Entity Linking · Entity Typing +6

Graph Neural Networks: A Review of Methods and Applications

5 code implementations 20 Dec 2018 Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun

Many learning tasks require dealing with graph data, which contains rich relation information among elements.

Graph Attention
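As background for the review, a single generic message-passing layer can be sketched as follows (assuming mean aggregation over neighbors followed by a linear transform and a nonlinearity); the survey itself covers many concrete variants such as GCNs, GATs, and gated GNNs.

```python
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    """One message-passing step: average neighbor features, then transform."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim] node features; adj: [num_nodes, num_nodes] 0/1 adjacency
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ x / deg))
```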

A Unified Framework for Community Detection and Network Representation Learning

no code implementations 21 Nov 2016 Cunchao Tu, Xiangkai Zeng, Hao Wang, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun, Bo Zhang, Leyu Lin

Network representation learning (NRL) aims to learn low-dimensional vectors for vertices in a network.

Social and Information Networks · Physics and Society
