Search Results for author: Chaojun Xiao

Found 29 papers, 17 papers with code

Robust and Scalable Model Editing for Large Language Models

1 code implementation · 26 Mar 2024 · Yingfa Chen, Zhengyan Zhang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Chen Chen, Kuai Li, Tao Yang, Maosong Sun

Large language models (LLMs) can make predictions using parametric knowledge (knowledge encoded in the model weights) or contextual knowledge (knowledge presented in the context).

Model Editing

Ouroboros: Speculative Decoding with Large Model Enhanced Drafting

1 code implementation · 21 Feb 2024 · Weilin Zhao, Yuxiang Huang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Maosong Sun

In this paper, we introduce Ouroboros, which constructs a phrase candidate pool from the verification process of LLMs to provide candidates for draft generation of the small model.

Text Generation
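
For readers unfamiliar with the idea, here is a toy sketch of speculative decoding in which rejected draft suffixes are recycled into a phrase candidate pool, in the spirit of the snippet above. It is not the authors' implementation; `draft_model` and `target_model` are hypothetical stand-ins for real LLMs.

```python
# Toy sketch (not the authors' code): speculative decoding where rejected
# draft suffixes are recycled into a phrase candidate pool for later drafts.
# draft_model and target_model are hypothetical stand-ins for real LLMs.

def draft_model(prefix, n=4):
    """Hypothetical small model: proposes the next n draft tokens."""
    return [f"tok{len(prefix) + i}" for i in range(n)]

def target_model(prefix, draft):
    """Hypothetical large model: verifies the draft in one parallel pass,
    returning how many tokens it accepts plus one corrected token."""
    accepted = len(draft) // 2  # pretend it accepts half the draft
    return accepted, f"tok{len(prefix) + accepted}"

def generate(steps=3):
    tokens, phrase_pool = [], []
    for _ in range(steps):
        # Prefer a cached phrase from earlier verification as the draft.
        draft = phrase_pool.pop() if phrase_pool else draft_model(tokens)
        accepted, correction = target_model(tokens, draft)
        tokens += draft[:accepted] + [correction]
        if accepted < len(draft):            # recycle the rejected suffix
            phrase_pool.append(draft[accepted:])
    return tokens

print(generate())
```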

InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory

no code implementations · 7 Feb 2024 · Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, Maosong Sun

To alleviate these issues, existing efforts employ sliding attention windows and discard distant tokens to process extremely long sequences.
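
As a point of reference, the sketch below shows the sliding-window attention mask such baselines use, i.e., the approach the snippet says InfLLM improves on; it does not implement InfLLM's training-free memory, and the shapes are illustrative.

```python
# Minimal sketch of the sliding-window baseline mentioned above (not
# InfLLM's memory mechanism): each token attends only to a recent window.
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: query i may attend to key j iff i - window < j <= i."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    return (j <= i) & (j > i - window)

print(sliding_window_mask(6, 3).astype(int))
```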

ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs

no code implementations · 6 Feb 2024 · Zhengyan Zhang, Yixin Song, Guanghui Yu, Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, Maosong Sun

To find the most efficient activation function for sparse computation, we propose a systematic framework to examine the sparsity of LLMs from three aspects: the trade-off between sparsity and performance, the predictivity of sparsity, and the hardware affinity.
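
The snippet's first axis, the trade-off between sparsity and performance, starts from measuring how sparse activations actually are. Below is a toy sketch of that measurement under a squared ReLU; the shapes and near-zero threshold are illustrative assumptions, not the paper's settings.

```python
# Toy sketch of one axis of the framework above: measuring activation
# sparsity under a squared ReLU (ReLU^2). Shapes and the near-zero
# threshold are illustrative assumptions, not the paper's settings.
import torch

pre_act = torch.randn(8, 1024)                   # hypothetical pre-activations
act = torch.relu(pre_act) ** 2                   # ReLU^2 activation
sparsity = (act.abs() < 1e-6).float().mean()     # fraction of inactive entries
print(f"activation sparsity: {sparsity:.2%}")    # ~50% for N(0, 1) inputs
```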

MUSER: A Multi-View Similar Case Retrieval Dataset

1 code implementation · 24 Oct 2023 · Qingquan Li, Yiran Hu, Feng Yao, Chaojun Xiao, Zhiyuan Liu, Maosong Sun, Weixing Shen

Furthermore, the case similarities are typically measured solely by the textual semantics of the fact descriptions, which may fail to capture the full complexity of legal cases from the perspective of legal knowledge.

Fairness Retrieval +3

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

1 code implementation · 24 Oct 2023 · Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs.

Computational Efficiency

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

no code implementations · 20 Oct 2023 · Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan Yao, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie Zhou

With the success of pre-trained language models (PLMs) widely verified across NLP tasks, pioneering efforts have explored combining the general textual information in PLMs with the personalized behavioral information in user historical behavior sequences to enhance sequential recommendation (SR).

Informativeness Language Modelling +1

Plug-and-Play Document Modules for Pre-trained Models

1 code implementation · 28 May 2023 · Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Zhao Cao, Maosong Sun

By inserting document plugins into the backbone PTM for downstream tasks, we can encode a document once to handle multiple tasks, which is more efficient than conventional encoding-task coupling methods that simultaneously encode documents and input queries with task-specific encoders.

Question Answering
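
The encode-once idea in the snippet above can be made concrete with a small caching sketch. The encoder and task heads below are hypothetical placeholders, not the paper's plugin architecture.

```python
# Sketch of the encode-once idea above: cache each document's encoding and
# reuse it across tasks. The encoder and task heads are hypothetical
# placeholders, not the paper's plugin architecture.
import numpy as np

_cache: dict = {}

def encode_document(doc: str) -> np.ndarray:
    """Hypothetical expensive encoder, run once per document."""
    if doc not in _cache:
        rng = np.random.default_rng(abs(hash(doc)) % 2**32)
        _cache[doc] = rng.standard_normal(16)    # stand-in "document plugin"
    return _cache[doc]

def run_task(doc: str, query: str, task_head):
    plugin = encode_document(doc)                # reused, never re-encoded
    return task_head(plugin, query)

qa_head = lambda plugin, q: float(plugin.sum())      # toy task heads
rank_head = lambda plugin, q: float(plugin.mean())
doc = "a long document"
print(run_task(doc, "who?", qa_head), run_task(doc, "rank", rank_head))
```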

Plug-and-Play Knowledge Injection for Pre-trained Language Models

1 code implementation · 28 May 2023 · Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.

Emergent Modularity in Pre-trained Transformers

1 code implementation · 28 May 2023 · Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou

By analogy with human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.
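
One way to picture the specialization check described above is to compare each neuron's mean activation across function categories and flag neurons dominated by one category. The data and cutoff below are illustrative, not the paper's protocol.

```python
# Toy sketch of a functional-specialization check like the one described
# above: per neuron, compare mean activation across function categories
# and flag dominance by one category. Data and cutoff are illustrative.
import numpy as np

rng = np.random.default_rng(0)
acts = rng.random((3, 100, 8))       # (function, examples, neurons)
acts[0, :, :3] += 2.0                # make neurons 0-2 prefer function 0

mean_per_fn = acts.mean(axis=1)                    # (function, neurons)
share = mean_per_fn.max(axis=0) / mean_per_fn.sum(axis=0)
specialized = share > 0.5                          # hypothetical cutoff
print(f"{specialized.sum()} of {acts.shape[2]} neurons look specialized")
```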

LEVEN: A Large-Scale Chinese Legal Event Detection Dataset

1 code implementation · Findings (ACL) 2022 · Feng Yao, Chaojun Xiao, Xiaozhi Wang, Zhiyuan Liu, Lei Hou, Cunchao Tu, Juanzi Li, Yun Liu, Weixing Shen, Maosong Sun

However, existing Legal Event Detection (LED) datasets cover only a narrow set of event types and have limited annotated data, which restricts the development of LED methods and their downstream applications.

Event Detection Retrieval

CPM-2: Large-scale Cost-effective Pre-trained Language Models

2 code implementations · 20 Jun 2021 · Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, Zhenbo Sun, Yuan Yao, Fanchao Qi, Jian Guan, Pei Ke, Yanzheng Cai, Guoyang Zeng, Zhixing Tan, Zhiyuan Liu, Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun

We present a suite of cost-effective techniques for using PLMs that address the efficiency issues of pre-training, fine-tuning, and inference.

Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents

1 code implementation · 9 May 2021 · Chaojun Xiao, Xueyu Hu, Zhiyuan Liu, Cunchao Tu, Maosong Sun

Legal artificial intelligence (LegalAI) aims to benefit legal systems with the technology of artificial intelligence, especially natural language processing (NLP).

Language Modelling Question Answering +2

Equality before the Law: Legal Judgment Consistency Analysis for Fairness

no code implementations · 25 Mar 2021 · Yuzhong Wang, Chaojun Xiao, Shirong Ma, Haoxi Zhong, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, Maosong Sun

We propose to simulate judges from different groups with legal judgment prediction (LJP) models and measure judicial inconsistency as the disagreement among the judgment results given by LJP models trained on different groups.

Fairness
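
A hedged sketch of such a disagreement measure follows: one LJP model per group, with inconsistency scored as mean pairwise disagreement on the same cases. The verdict lists are fabricated for illustration.

```python
# Sketch of a disagreement measure like the one described above: one LJP
# model per group, inconsistency = mean pairwise disagreement on the same
# cases. The verdict lists below are fabricated for illustration.
from itertools import combinations

def disagreement_rate(predictions):
    """Mean pairwise disagreement between group-specific models."""
    pairs = list(combinations(predictions.values(), 2))
    rates = [sum(a != b for a, b in zip(p, q)) / len(p) for p, q in pairs]
    return sum(rates) / len(rates)

preds = {  # hypothetical verdicts from models trained on three groups
    "group_a": [1, 0, 1, 1],
    "group_b": [1, 1, 1, 0],
    "group_c": [1, 0, 0, 1],
}
print(f"judicial inconsistency: {disagreement_rate(preds):.2f}")
```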

UPRec: User-Aware Pre-training for Recommender Systems

no code implementations · 22 Feb 2021 · Chaojun Xiao, Ruobing Xie, Yuan Yao, Zhiyuan Liu, Maosong Sun, Xu Zhang, Leyu Lin

Existing sequential recommendation methods rely on large amounts of training data and usually suffer from the data sparsity problem.

Self-Supervised Learning Sequential Recommendation

Denoising Relation Extraction from Document-level Distant Supervision

1 code implementation · EMNLP 2020 · Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, Leyu Lin

Distant supervision (DS) has been widely used to generate auto-labeled data for sentence-level relation extraction (RE), which improves RE performance.

Denoising Document-level Relation Extraction +2
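
The distant-supervision labeling step behind the snippet above can be sketched in a few lines: any sentence mentioning both entities of a knowledge-base triple is auto-labeled with that relation, which is noisy by design and is why denoising is needed. The tiny KB and corpus are fabricated for illustration.

```python
# Minimal sketch of distant-supervision labeling as described above: any
# sentence mentioning both entities of a KB triple is auto-labeled with
# that relation (noisy by design, hence denoising). The tiny KB and
# corpus are fabricated for illustration.
kb = {("Paris", "France"): "capital_of"}
corpus = [
    "Paris is the capital of France.",
    "Paris and France co-hosted the summit.",  # spurious match: wrong relation
]

auto_labeled = [
    (sentence, relation)
    for (head, tail), relation in kb.items()
    for sentence in corpus
    if head in sentence and tail in sentence
]
print(auto_labeled)  # both sentences get "capital_of", the second wrongly
```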

Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect

no code implementations · 19 Sep 2020 · Zheni Zeng, Chaojun Xiao, Yuan Yao, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

Recommender systems aim to provide item recommendations for users, and usually face the data sparsity problem (e.g., cold start) in real-world scenarios.

Recommendation Systems Transfer Learning

How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence

2 code implementations · ACL 2020 · Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, Maosong Sun

Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.

JEC-QA: A Legal-Domain Question Answering Dataset

no code implementations · 27 Nov 2019 · Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, Maosong Sun

We present JEC-QA, the largest question answering dataset in the legal domain, collected from the National Judicial Examination of China.

Question Answering Reading Comprehension

Adversarial Language Games for Advanced Natural Language Intelligence

no code implementations · 5 Nov 2019 · Yuan Yao, Haoxi Zhong, Zhengyan Zhang, Xu Han, Xiaozhi Wang, Chaojun Xiao, Guoyang Zeng, Zhiyuan Liu, Maosong Sun

In this work, we propose a challenging adversarial language game called Adversarial Taboo as an example, in which an attacker and a defender compete around a target word.

Board Games

Overview of CAIL2018: Legal Judgment Prediction Competition

2 code implementations · 13 Oct 2018 · Haoxi Zhong, Chaojun Xiao, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, Jianfeng Xu

In this paper, we give an overview of the Legal Judgment Prediction (LJP) competition at Chinese AI and Law challenge (CAIL2018).

Legal Judgment Prediction via Topological Learning

1 code implementation · EMNLP 2018 · Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, Maosong Sun

Legal Judgment Prediction (LJP) aims to predict the judgment result based on the facts of a case and becomes a promising application of artificial intelligence techniques in the legal field.

Multi-Task Learning Text Classification

CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction

3 code implementations · 4 Jul 2018 · Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, Jianfeng Xu

In this paper, we introduce the Chinese AI and Law challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for judgment prediction.

Text Classification
