Search Results for author: Chenxin An

Found 11 papers, 6 papers with code

Training-Free Long-Context Scaling of Large Language Models

1 code implementation • 27 Feb 2024 • Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong

The ability of Large Language Models (LLMs) to process and generate coherent text is markedly weakened when the number of input tokens exceeds their pretraining length.

16k
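
For illustration, below is a minimal sketch of one well-known training-free route to longer contexts, position interpolation, which compresses position indices back into the pretrained range before applying RoPE. This is not the paper's method (the paper proposes a chunk-based attention scheme); the function names and the 4k-to-16k setting are assumptions for the example.

```python
import torch

def rope_cache(seq_len, head_dim, base=10000.0, scale=1.0):
    """Precompute RoPE cos/sin tables. scale < 1.0 implements position
    interpolation: positions are compressed so that a longer sequence
    maps back into the pretraining range."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() * scale   # compress positions
    angles = torch.outer(positions, inv_freq)           # (seq_len, head_dim/2)
    return angles.cos(), angles.sin()

def apply_rope(x, cos, sin):
    """Rotate channel pairs of x: (batch, seq_len, head_dim)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)

# Hypothetical setting: extend a model pretrained on 4k tokens to 16k.
pretrain_len, target_len, head_dim = 4096, 16384, 64
cos, sin = rope_cache(target_len, head_dim, scale=pretrain_len / target_len)
q_rot = apply_rope(torch.randn(1, target_len, head_dim), cos, sin)
```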

Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective

no code implementations • 17 Oct 2023 • Ming Zhong, Chenxin An, Weizhu Chen, Jiawei Han, Pengcheng He

In this paper, we seek to empirically investigate knowledge transfer from larger to smaller models through a parametric perspective.

Transfer Learning
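
As a loose illustration of what a parametric perspective can mean in practice, the hypothetical sketch below carves a sub-matrix out of a teacher weight by a simple norm criterion and uses it to initialize a smaller student layer. The selection rule, shapes, and function name are invented for the example and are not the paper's procedure.

```python
import torch

def extract_nuggets(teacher_w, student_shape):
    """Hypothetical parameter extraction: keep the rows and columns of a
    teacher weight matrix with the largest L2 norm, truncated to the
    student's shape."""
    out_s, in_s = student_shape
    row_idx = teacher_w.norm(dim=1).topk(out_s).indices.sort().values
    col_idx = teacher_w.norm(dim=0).topk(in_s).indices.sort().values
    return teacher_w[row_idx][:, col_idx]

teacher_w = torch.randn(4096, 4096)                    # a large-model projection
student_w = extract_nuggets(teacher_w, (2048, 2048))   # init a smaller layer
```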

Scaling Laws of RoPE-based Extrapolation

1 code implementation • 8 Oct 2023 • Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, Dahua Lin

The extrapolation capability of Large Language Models (LLMs) based on Rotary Position Embedding is currently a topic of considerable interest.

16k
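
A small sketch of the quantity at stake: RoPE's rotary base sets how fast each frequency channel rotates with position, which in turn determines which relative distances a model has effectively seen during training. The base values and dimensions below are arbitrary examples, not settings from the paper.

```python
import torch

def rope_angles(pos, head_dim, base):
    """Rotation angle of each RoPE frequency channel at position `pos`."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return pos * inv_freq

# The slowest channel completes a full rotation only after roughly
# 2*pi*base^((d-2)/d) positions, so the base controls how much of each
# channel's period is covered at training length; this is the kind of
# relationship the paper's scaling laws characterize.
for base in (1e4, 5e5):
    angles = rope_angles(pos=16384, head_dim=64, base=base)
    print(f"base={base:.0e}: slowest channel angle = {angles[-1].item():.3f} rad")
```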

L-Eval: Instituting Standardized Evaluation for Long Context Language Models

3 code implementations • 20 Jul 2023 • Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu

Recently, there has been growing interest in extending the context length of large language models (LLMs), aiming to effectively process long single-turn inputs or conversations with extensive histories.

Instruction Following

Optimizing Non-Autoregressive Transformers with Contrastive Learning

no code implementations • 23 May 2023 • Chenxin An, Jiangtao Feng, Fei Huang, Xipeng Qiu, Lingpeng Kong

In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution.

Contrastive Learning • Machine Translation • +2
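
A hedged sketch of the general idea, assuming a standard InfoNCE-style objective rather than the paper's exact loss: negative candidates are drawn from the model's own predictive distribution instead of being taken from the data. Re-encoding the sampled tokens into neg_embs is omitted here.

```python
import torch
import torch.nn.functional as F

def model_distribution_samples(logits, n_samples):
    """Draw candidate token sequences from the model's own predictive
    distribution (rather than taking targets from the data)."""
    b, t, v = logits.shape
    probs = F.softmax(logits, dim=-1).reshape(b * t, v)
    samples = torch.multinomial(probs, n_samples, replacement=True)
    return samples.reshape(b, t, n_samples).permute(2, 0, 1)   # (n, b, t)

def contrastive_loss(seq_emb, pos_emb, neg_embs, tau=0.1):
    """InfoNCE over sequence embeddings: pull the model output toward the
    reference, push it away from its own sampled candidates."""
    pos = F.cosine_similarity(seq_emb, pos_emb, dim=-1) / tau                 # (b,)
    neg = F.cosine_similarity(seq_emb.unsqueeze(0), neg_embs, dim=-1) / tau   # (n, b)
    logits = torch.cat([pos.unsqueeze(0), neg], dim=0)                        # (n+1, b)
    return F.cross_entropy(logits.t(), torch.zeros(logits.size(1), dtype=torch.long))
```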

COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization

1 code implementation • COLING 2022 • Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuanjing Huang, Xipeng Qiu

Traditional training paradigms for extractive and abstractive summarization systems rely only on token-level or sentence-level training objectives.

Abstractive Text Summarization • Contrastive Learning • +2
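
For a concrete sense of what a sequence-level objective looks like, here is a generic pairwise ranking sketch over candidate summaries assumed to be pre-sorted by quality (e.g. ROUGE against the reference). The margin scheme is an assumption for illustration, not COLO's exact formulation.

```python
import torch
import torch.nn.functional as F

def ranking_contrastive_loss(scores, margin=0.01):
    """Each candidate should outscore every lower-ranked one, by a margin
    that grows with the rank gap (candidates sorted best to worst)."""
    n = scores.size(0)
    loss = scores.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + F.relu(scores[j] - scores[i] + margin * (j - i))
    return loss / (n * (n - 1) / 2)

scores = torch.randn(4, requires_grad=True)   # scorer outputs for 4 candidates
ranking_contrastive_loss(scores).backward()
```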

CoNT: Contrastive Neural Text Generation

2 code implementations • 29 May 2022 • Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, Xuanjing Huang

We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation and commonsense generation.

Code Comment Generation • Comment Generation • +4

$\mathcal{Y}$-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning

no code implementations • 20 Feb 2022 • Yitao Liu, Chenxin An, Xipeng Qiu

With the success of large-scale pre-trained models (PTMs), the question of how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters.

Representation Learning
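
A simplified sketch of the frozen-backbone idea, assuming the encoder returns per-token hidden states of shape (batch, tokens, dim): only the label embeddings and a small projection receive gradients, so tuning cost stays small regardless of PTM size. The class and its interface are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LabelTuner(nn.Module):
    """Classify by matching pooled text features against learned label
    representations; the pre-trained encoder itself is never updated."""
    def __init__(self, frozen_encoder, hidden_dim, num_labels):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.label_emb = nn.Embedding(num_labels, hidden_dim)
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, input_ids):
        with torch.no_grad():                         # no gradients through the PTM
            h = self.encoder(input_ids).mean(dim=1)   # pooled features (b, d)
        return self.proj(h) @ self.label_emb.weight.t()   # logits (b, num_labels)
```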

TURNER: The Uncertainty-based Retrieval Framework for Chinese NER

no code implementations • 18 Feb 2022 • Zhichao Geng, Hang Yan, Zhangyue Yin, Chenxin An, Xipeng Qiu

Chinese NER is a difficult undertaking due to the ambiguity of Chinese characters and the absence of word boundaries.

General Knowledge • NER • +1
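
An illustrative take on uncertainty-based triggering, assuming per-token tag entropy as the uncertainty measure: only high-entropy tokens would prompt a retrieval step. The threshold, tag-set size, and the retrieve helper named in the comment are hypothetical, not the paper's exact components.

```python
import torch
import torch.nn.functional as F

def uncertain_tokens(tag_logits, threshold=1.0):
    """Flag tokens whose tag distribution has high entropy; only these
    would trigger an external-knowledge retrieval step."""
    probs = F.softmax(tag_logits, dim=-1)                      # (t, num_tags)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)   # (t,)
    return (entropy > threshold).nonzero(as_tuple=True)[0]

tag_logits = torch.randn(12, 9)   # 12 tokens, BIO tags over 4 entity types
for idx in uncertain_tokens(tag_logits).tolist():
    pass   # e.g. knowledge = retrieve(window_around(idx)); re-decode with it
```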

Enhancing Scientific Papers Summarization with Citation Graph

1 code implementation • 7 Apr 2021 • Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang

Previous work on text summarization in the scientific domain has mainly focused on the content of the input document, seldom considering its citation network.

Text Summarization
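
A minimal sketch of one way to fold the citation network in, assuming precomputed abstract embeddings: a single round of mean aggregation over citation edges enriches each paper's representation before the summarizer reads it. The mixing weight and edge list are made up for the example.

```python
import torch

def aggregate_citations(paper_emb, citation_edges, alpha=0.5):
    """Mix each paper's embedding with the mean embedding of its
    citation-graph neighbours (edges treated as undirected)."""
    n, _ = paper_emb.shape
    neigh_sum = torch.zeros_like(paper_emb)
    deg = torch.zeros(n)
    for src, dst in citation_edges:
        neigh_sum[src] += paper_emb[dst]; deg[src] += 1
        neigh_sum[dst] += paper_emb[src]; deg[dst] += 1
    neigh_mean = neigh_sum / deg.clamp_min(1).unsqueeze(1)
    return alpha * paper_emb + (1 - alpha) * neigh_mean

emb = torch.randn(5, 128)                  # 5 papers, 128-d abstract embeddings
edges = [(0, 1), (0, 2), (3, 0), (4, 2)]   # (citing, cited) pairs
enriched = aggregate_citations(emb, edges)
```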
