Search Results for author: Juntao Li

Found 58 papers, 35 papers with code

Plan-CVAE: A Planning-based Conditional Variational Autoencoder for Story Generation

no code implementations CCL 2020 Lin Wang, Juntao Li, Rui Yan, Dongyan Zhao

Story generation is a challenging task of automatically creating natural language to describe a sequence of events, which requires outputting text with not only a consistent topic but also novel wording.

Diversity Story Generation

KeyB2: Selecting Key Blocks is Also Important for Long Document Ranking with Large Language Models

no code implementations 9 Nov 2024 Minghan Li, Eric Gaussier, Juntao Li, Guodong Zhou

Comprehensive experiments on long-document datasets, including TREC 2019 DL, Robust04, and MLDR-zh, show that KeyB2 outperforms baselines like RankLLaMA and KeyB by reducing reranking time and GPU memory usage while enhancing retrieval performance, achieving new SOTA results on TREC 2019 DL with higher NDCG@10 and MAP scores.

Document Ranking Information Retrieval +1

LOGO -- Long cOntext aliGnment via efficient preference Optimization

1 code implementation 24 Oct 2024 Zecheng Tang, Zechen Sun, Juntao Li, Qiaoming Zhu, Min Zhang

To overcome the GPU memory bottleneck caused by long sequences, LOGO employs a reference-free preference optimization strategy and adopts a position synthesis method to construct the training data.

Language Modelling MMLU
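
To make the reference-free strategy mentioned in the LOGO abstract concrete, here is a minimal sketch of a preference loss that scores a preferred response against a dispreferred one using only the policy model's log-probabilities, with no frozen reference model in memory; the loss form and the `beta`/`margin` parameters are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch of a reference-free preference loss (assumed form; see the
# LOGO paper for the actual objective). No reference model is kept in memory.
import torch
import torch.nn.functional as F

def reference_free_preference_loss(logp_chosen, logp_rejected,
                                   beta=1.0, margin=0.0):
    """logp_*: length-normalized log-probs of the preferred / dispreferred
    responses under the policy model alone."""
    logits = beta * (logp_chosen - logp_rejected) - margin
    # Maximize the probability that the chosen response is ranked higher.
    return -F.logsigmoid(logits).mean()

# Toy usage with fabricated log-probabilities for two preference pairs.
loss = reference_free_preference_loss(torch.tensor([-12.3, -9.8]),
                                      torch.tensor([-15.1, -11.0]))
print(loss.item())
```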

Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch

1 code implementation 24 Oct 2024 Yuyang Ding, Xinyu Shi, Xiaobo Liang, Juntao Li, Qiaoming Zhu, Min Zhang

The availability of high-quality data is one of the most important factors in improving the reasoning capability of LLMs.

Math Mathematical Reasoning

Beware of Calibration Data for Pruning Large Language Models

no code implementations 23 Oct 2024 Yixin Ji, Yang Xiang, Juntao Li, Qingrong Xia, Ping Li, Xinyu Duan, Zhefeng Wang, Min Zhang

As large language models (LLMs) are widely applied across various fields, model compression has become increasingly crucial for reducing costs and improving inference efficiency.

Model Compression

Revealing and Mitigating the Local Pattern Shortcuts of Mamba

1 code implementation 21 Oct 2024 Wangjie You, Zecheng Tang, Juntao Li, Lili Yao, Min Zhang

Large language models (LLMs) have advanced significantly due to the attention mechanism, but their quadratic complexity and linear memory demands limit their performance on long-context tasks.

Mamba State Space Models

L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?

1 code implementation 3 Oct 2024 Zecheng Tang, Keyan Zhou, Juntao Li, Baibei Ji, Jianye Hou, Min Zhang

Long-context models (LCMs) have made remarkable strides in recent years, offering users great convenience for handling tasks that involve long context, such as document summarization.

8k Document Summarization +2

MemLong: Memory-Augmented Retrieval for Long Text Modeling

1 code implementation 30 Aug 2024 Weijie Liu, Zecheng Tang, Juntao Li, Kehai Chen, Min Zhang

This work introduces MemLong: Memory-Augmented Retrieval for Long Text Generation, a method designed to enhance the capabilities of long-context language modeling by utilizing an external retriever for historical information retrieval.

4k Decoder +4
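
To illustrate the retrieval idea in the MemLong abstract, the sketch below keeps an external store of past text chunks and retrieves the most similar ones by embedding similarity; the `ChunkMemory` class and its cosine-similarity lookup are assumptions for illustration, not MemLong's actual components.

```python
# Hedged sketch of an external chunk memory for historical retrieval.
import numpy as np

class ChunkMemory:
    def __init__(self):
        self.keys, self.chunks = [], []

    def write(self, embedding: np.ndarray, chunk: str):
        # Store a unit-normalized key alongside its raw text chunk.
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.chunks.append(chunk)

    def retrieve(self, query_emb: np.ndarray, k: int = 2):
        q = query_emb / np.linalg.norm(query_emb)
        sims = np.stack(self.keys) @ q          # cosine similarity to all keys
        top = np.argsort(-sims)[:k]
        return [self.chunks[i] for i in top]

# Toy usage with random stand-in embeddings.
mem = ChunkMemory()
rng = np.random.default_rng(0)
for text in ["chapter one ...", "chapter two ...", "chapter three ..."]:
    mem.write(rng.normal(size=8), text)
print(mem.retrieve(rng.normal(size=8), k=2))
```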

ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM

1 code implementation 22 Aug 2024 Zhaochen Su, Jun Zhang, Xiaoye Qu, Tong Zhu, Yanshu Li, Jiashuo Sun, Juntao Li, Min Zhang, Yu Cheng

Only a few studies have explored the conflicts between the inherent knowledge of LLMs and the retrieved contextual knowledge.

Misinformation

OPT-Tree: Speculative Decoding with Adaptive Draft Tree Structure

1 code implementation 25 Jun 2024 Jikai Wang, Yi Su, Juntao Li, Qingrong Xia, Zi Ye, Xinyu Duan, Zhefeng Wang, Min Zhang

It searches for the tree structure that maximizes the mathematical expectation of the acceptance length at each decoding step.
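
That expectation can be made concrete: a draft node contributes only if its entire root path is accepted, so the expected acceptance length of a tree is the sum of its nodes' path acceptance probabilities. The greedy builder below, which always adds the candidate with the highest path probability, is a sketch under that model, not necessarily OPT-Tree's exact construction.

```python
# Hedged sketch: expected acceptance length of a greedily built draft tree.
import heapq

def expected_acceptance_length(candidate_probs, budget):
    """candidate_probs[d]: acceptance probabilities of the draft candidates
    proposed at depth d. Builds a `budget`-node tree greedily and returns
    the sum of path probabilities, i.e. E[accepted tokens] per step."""
    heap = [(-p, 0) for p in candidate_probs[0]]   # (neg path prob, depth)
    heapq.heapify(heap)
    expected = 0.0
    for _ in range(budget):
        if not heap:
            break
        neg_p, depth = heapq.heappop(heap)          # best remaining candidate
        path_p = -neg_p
        expected += path_p                          # this node's contribution
        if depth + 1 < len(candidate_probs):
            for q in candidate_probs[depth + 1]:    # enqueue its children
                heapq.heappush(heap, (-path_p * q, depth + 1))
    return expected

# Toy example: two draft positions with three candidates each.
print(expected_acceptance_length([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]], budget=4))
```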

Timo: Towards Better Temporal Reasoning for Language Models

1 code implementation 20 Jun 2024 Zhaochen Su, Jun Zhang, Tong Zhu, Xiaoye Qu, Juntao Li, Min Zhang, Yu Cheng

Therefore, we propose a crucial question: Can we build a universal framework to handle a variety of temporal reasoning tasks?

Question Answering

A Survey on Human Preference Learning for Large Language Models

no code implementations 17 Jun 2024 Ruili Jiang, Kehai Chen, Xuefeng Bai, Zhixuan He, Juntao Li, Muyun Yang, Tiejun Zhao, Liqiang Nie, Min Zhang

In this survey, we review the progress in exploring human preference learning for LLMs from a preference-centered perspective, covering the sources and formats of preference feedback, the modeling and usage of preference signals, as well as the evaluation of the aligned LLMs.

Demonstration Augmentation for Zero-shot In-context Learning

1 code implementation 3 Jun 2024 Yi Su, Yunpeng Tai, Yixin Ji, Juntao Li, Bowen Yan, Min Zhang

Large Language Models (LLMs) have demonstrated an impressive capability known as In-context Learning (ICL), which enables them to acquire knowledge from textual demonstrations without the need for parameter updates.

In-Context Learning

Fennec: Fine-grained Language Model Evaluation and Correction Extended through Branching and Bridging

1 code implementation 20 May 2024 Xiaobo Liang, Haoke Zhang, Helan Hu, Juntao Li, Jun Xu, Min Zhang

The rapid advancement of large language models has given rise to a wide range of applications across real-world tasks, mainly centered on aligning with human intent.

Language Modelling

Feature-based Low-Rank Compression of Large Language Models via Bayesian Optimization

1 code implementation 17 May 2024 Yixin Ji, Yang Xiang, Juntao Li, Wei Chen, Zhongyi Liu, Kehai Chen, Min Zhang

To address the challenges of low-rank compression in LLMs, we conduct empirical research on the low-rank characteristics of large models.

Bayesian Optimization Low-rank compression

OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning

1 code implementation 9 May 2024 Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wenjing Xie, Yuechi Zhou, Yuyang Ding, Zecheng Tang, Jikai Wang, Yixin Ji, Yue Wang, Pei Guo, Zechen Sun, Zikang Zhang, Juntao Li, Pingfu Chao, Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang

Large Language Models (LLMs) have played an important role in many fields due to their powerful capabilities. However, their massive number of parameters leads to high deployment requirements and incurs significant inference costs, which impedes their practical applications.

Common Sense Reasoning named-entity-recognition +2

Rethinking Negative Instances for Generative Named Entity Recognition

2 code implementations 26 Feb 2024 Yuyang Ding, Juntao Li, Pinzheng Wang, Zecheng Tang, Bowen Yan, Min Zhang

In the Named Entity Recognition (NER) task, recent advancements have brought remarkable improvements to LLMs across a broad range of entity domains via instruction tuning with an entity-centric schema.

named-entity-recognition Named Entity Recognition +2

StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis

no code implementations 30 Jan 2024 Zecheng Tang, Chenfei Wu, Zekai Zhang, Mingheng Ni, Shengming Yin, Yu Liu, Zhengyuan Yang, Lijuan Wang, Zicheng Liu, Juntao Li, Nan Duan

To leverage LLMs for visual synthesis, traditional methods convert raster image information into discrete grid tokens through specialized visual modules, thereby disrupting the model's ability to capture the true semantic representation of visual scenes.

Vector Graphics

Resolving Crash Bugs via Large Language Models: An Empirical Study

no code implementations 16 Dec 2023 Xueying Du, Mingwei Liu, Juntao Li, Hanlin Wang, Xin Peng, Yiling Lou

Evaluating IntDiagSolver on multiple LLMs, including ChatGPT, Claude, and CodeLlama, reveals consistent improvements in the accuracy of crash bug resolution.

Language Modelling Large Language Model

KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model

1 code implementation 20 Nov 2023 Lei Geng, Xu Yan, Ziqiang Cao, Juntao Li, Wenjie Li, Sujian Li, Xinjie Zhou, Yang Yang, Jun Zhang

We construct a biomedical multilingual corpus by incorporating knowledge alignments at three granularities (entity, fact, and passage levels) into monolingual corpora.

Relation XLM-R

Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting

1 code implementation 20 Oct 2023 Zecheng Tang, Kaifeng Qi, Juntao Li, Min Zhang

By leveraging augmented data from the GEC models themselves in the post-training process and introducing regularization data for cycle training, our proposed method effectively improves the robustness of well-trained GEC models at the cost of only a few additional training epochs.

Adversarial Attack Grammatical Error Correction
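A minimal sketch of the cycle self-augmenting loop described above, assuming placeholder `correct` and `fine_tune` hooks rather than the authors' actual training code:

```python
# Hedged sketch: recycle the GEC model's own imperfect outputs as new
# training pairs, mixed with regularization data, for a few extra epochs.
def cycle_self_augment(model, train_pairs, regularization_pairs, cycles=2):
    for _ in range(cycles):
        augmented = []
        for source, gold in train_pairs:
            hypothesis = model.correct(source)        # model's own correction
            if hypothesis != gold:                    # still-imperfect output
                augmented.append((hypothesis, gold))  # recycle as a new pair
        # Mix self-augmented pairs with regularization data to avoid drift.
        model.fine_tune(list(train_pairs) + augmented + list(regularization_pairs))
    return model
```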

G-SPEED: General SParse Efficient Editing MoDel

1 code implementation 16 Oct 2023 Haoke Zhang, Yue Wang, Juntao Li, Xiabing Zhou, Min Zhang

Large Language Models (LLMs) have demonstrated incredible capabilities in understanding, generating, and manipulating languages.

OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch

1 code implementation 19 Sep 2023 Juntao Li, Zecheng Tang, Yuyang Ding, Pinzheng Wang, Pei Guo, Wangjie You, Dan Qiao, Wenliang Chen, Guohong Fu, Qiaoming Zhu, Guodong Zhou, Min Zhang

This report provides the main details to pre-train an analogous model, including pre-training data processing, Bilingual Flan data collection, the empirical observations that inspire our model architecture design, training objectives of different stages, and other enhancement techniques.

Belebele MMLU

LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models

1 code implementation 18 Sep 2023 Zecheng Tang, Chenfei Wu, Juntao Li, Nan Duan

Graphic layout generation, a growing research field, plays a significant role in user engagement and information perception.

Code Completion Code Generation

Harnessing the Power of David against Goliath: Exploring Instruction Data Generation without Using Closed-Source Models

no code implementations 24 Aug 2023 Yue Wang, Xinrui Wang, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang

Instruction tuning is instrumental in enabling Large Language Models (LLMs) to follow user instructions to complete various open-domain tasks.

GameEval: Evaluating LLMs on Conversational Games

1 code implementation 19 Aug 2023 Dan Qiao, Chenfei Wu, Yaobo Liang, Juntao Li, Nan Duan

In this paper, we propose GameEval, a novel approach to evaluating LLMs through goal-driven conversational games, overcoming the limitations of previous methods.

Question Answering

CMD: a framework for Context-aware Model self-Detoxification

3 code implementations 16 Aug 2023 Zecheng Tang, Keyan Zhou, Juntao Li, Yuyang Ding, Pinzheng Wang, Bowen Yan, Rejie Hua, Min Zhang

In view of this, we introduce a Context-aware Model self-Detoxification (CMD) framework that pays attention to both the context and the detoxification process, i.e., first detoxifying the context and then making the language model generate along the safe context.

Language Modelling
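
The two-stage pipeline stated in the CMD abstract reduces to a short sketch; `detoxifier` and `lm` below are placeholder callables standing in for the paper's models, not its actual API:

```python
# Hedged sketch of the detoxify-then-generate pipeline described above.
def cmd_generate(detoxifier, lm, context: str) -> str:
    safe_context = detoxifier(context)   # stage 1: rewrite the toxic context
    return lm(safe_context)              # stage 2: generate along safe context
```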

Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference!

1 code implementation 8 May 2023 Zecheng Tang, Pinzheng Wang, Keyan Zhou, Juntao Li, Ziqiang Cao, Min Zhang

Diffusion models have been successfully adapted to text generation tasks by mapping the discrete text into the continuous space.

Text Generation
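
For readers unfamiliar with the continuous mapping mentioned above, this toy sketch embeds tokens, applies one forward diffusion step, and rounds denoised vectors back to the nearest token embedding; the random embedding table and schedule value are stand-ins, not a trained model:

```python
# Hedged sketch: lifting discrete text into continuous space for diffusion.
import torch

vocab_size, dim = 100, 16
emb = torch.randn(vocab_size, dim)                 # stand-in embedding table

def forward_diffuse(token_ids, alpha_bar_t):
    x0 = emb[token_ids]                            # discrete -> continuous
    noise = torch.randn_like(x0)
    # Standard forward-process interpolation between signal and noise.
    return alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * noise

def round_to_tokens(x):
    # Map each continuous vector back to its nearest token embedding.
    return torch.cdist(x, emb).argmin(dim=-1)

x_t = forward_diffuse(torch.tensor([3, 7, 42]), torch.tensor(0.9))
print(round_to_tokens(x_t))
```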

Test-Time Adaptation with Perturbation Consistency Learning

no code implementations 25 Apr 2023 Yi Su, Yixin Ji, Juntao Li, Hai Ye, Min Zhang

Accordingly, in this paper, we propose perturbation consistency learning (PCL), a simple test-time adaptation method that encourages the model to make stable predictions on samples with distribution shifts.

Adversarial Robustness Pseudo Label +1
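
One generic way to render such a perturbation-consistency objective is the sketch below, which keeps dropout active to obtain two noisy views of the same input and penalizes their disagreement; this is an assumed rendering, not the authors' exact loss, and `model` is a placeholder classifier returning logits:

```python
# Hedged sketch of a perturbation-consistency objective for test-time
# adaptation (dropout serves as the perturbation here).
import torch
import torch.nn.functional as F

def perturbation_consistency_loss(model, batch):
    model.train()                       # keep dropout on for two noisy views
    logp1 = F.log_softmax(model(batch), dim=-1)
    logp2 = F.log_softmax(model(batch), dim=-1)
    # Symmetric KL between the two predictive distributions.
    return 0.5 * (F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")
                  + F.kl_div(logp2, logp1, log_target=True, reduction="batchmean"))
```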

Gated Mechanism Enhanced Multi-Task Learning for Dialog Routing

no code implementations COLING 2022 Ziming Huang, Zhuoxuan Jiang, Ke Wang, Juntao Li, Shanshan Feng, Xian-Ling Mao

Although most existing methods can fulfil this requirement, they can only model single-source dialog data and cannot effectively capture the underlying knowledge of relations among data and subtasks.

Multi-Task Learning

RenewNAT: Renewing Potential Translation for Non-Autoregressive Transformer

no code implementations 14 Mar 2023 Pei Guo, Yisheng Xiao, Juntao Li, Min Zhang

Non-autoregressive neural machine translation (NAT) models are proposed to accelerate the inference process while maintaining relatively high performance.

Machine Translation Translation

AMOM: Adaptive Masking over Masking for Conditional Masked Language Model

1 code implementation 13 Mar 2023 Yisheng Xiao, Ruiyang Xu, Lijun Wu, Juntao Li, Tao Qin, Tie-Yan Liu, Min Zhang

Experiments on 3 different tasks (neural machine translation, summarization, and code generation) with 15 datasets in total confirm that our proposed simple method achieves significant performance improvements over the strong CMLM model.

Code Generation Decoder +3

SelfMix: Robust Learning Against Textual Label Noise with Self-Mixup Training

1 code implementation COLING 2022 Dan Qiao, Chenchen Dai, Yuyang Ding, Juntao Li, Qiang Chen, Wenliang Chen, Min Zhang

The conventional success of text classification relies on annotated data, and the new paradigm of pre-trained language models (PLMs) still requires a few labeled examples for downstream tasks.

text-classification Text Classification

Chinese grammatical error correction based on knowledge distillation

2 code implementations 31 Jul 2022 Peng Xia, Yuechi Zhou, Ziyan Zhang, Zecheng Tang, Juntao Li

Given the poor robustness of existing Chinese grammatical error correction models on attack test sets and their large parameter counts, this paper uses knowledge distillation to compress model parameters and improve the models' resistance to attacks.

Grammatical Error Correction Knowledge Distillation

A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond

1 code implementation 20 Apr 2022 Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, Tie-Yan Liu

While NAR generation can significantly accelerate inference speed for machine translation, the speedup comes at the cost of sacrificed translation accuracy compared to its counterpart, autoregressive (AR) generation.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +12

Image-text Retrieval: A Survey on Recent Research and Development

no code implementations 28 Mar 2022 Min Cao, Shiping Li, Juntao Li, Liqiang Nie, Min Zhang

On top of this, efficiency-focused studies of ITR systems are introduced as the third perspective.

Image-text Retrieval Survey +1

CT4Rec: Simple yet Effective Consistency Training for Sequential Recommendation

2 code implementations 13 Dec 2021 Chong Liu, Xiaoyang Liu, Rongqin Zheng, Lixin Zhang, Xiaobo Liang, Juntao Li, Lijun Wu, Min Zhang, Leyu Lin

State-of-the-art sequential recommendation models proposed very recently combine contrastive learning techniques for obtaining high-quality user representations.

Click-Through Rate Prediction Contrastive Learning +2

Building an Efficient and Effective Retrieval-based Dialogue System via Mutual Learning

no code implementations 1 Oct 2021 Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang

For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.

Re-Ranking Retrieval

DM-CT: Consistency Training with Data and Model Perturbation

no code implementations 29 Sep 2021 Xiaobo Liang, Runze Mao, Lijun Wu, Juntao Li, Weiqing Liu, Qing Li, Min Zhang

Consistency training is commonly performed at the data level, typically using a data augmentation strategy (or adversarial training) to make the predictions for the augmented input and the original input consistent, so that the model is more robust and generalizes better.

Data Augmentation Image Classification +2
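
A minimal sketch of that data-level consistency objective, assuming a placeholder `augment` function and a generic classifier; DM-CT additionally perturbs the model itself, which this sketch omits:

```python
# Hedged sketch: supervised loss on the original view plus a KL consistency
# term that pulls the augmented view's prediction toward the original's.
import torch.nn.functional as F

def consistency_loss(model, x, y, augment, task_loss):
    logits = model(x)                       # prediction on the original input
    logits_aug = model(augment(x))          # prediction on the augmented input
    kl = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                  F.softmax(logits, dim=-1).detach(),
                  reduction="batchmean")
    return task_loss(logits, y) + kl
```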

Are BERT Families Zero-Shot Learners? A Study on Their Potential and Limitations

no code implementations 29 Sep 2021 Yue Wang, Lijun Wu, Xiaobo Liang, Juntao Li, Min Zhang

Since the resurgence of deep learning, language models (LMs) have never been more popular.

Learning to Organize a Bag of Words into Sentences with Neural Networks: An Empirical Study

no code implementations NAACL 2021 Chongyang Tao, Shen Gao, Juntao Li, Yansong Feng, Dongyan Zhao, Rui Yan

Sequential information, a.k.a. order, is assumed to be essential for processing a sequence with recurrent neural network or convolutional neural network based encoders.

Sentence

Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots

no code implementations 17 Mar 2021 Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, Rui Yan

To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN).

Representation Learning Retrieval +1

Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model

1 code implementation 23 Nov 2020 Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan

Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.

Language Modelling Mutual Information Estimation +1

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

2 code implementations EMNLP 2020 Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.

Text Classification Unsupervised Domain Adaptation
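
The CFd idea summarized above admits a short sketch: features from the pre-trained LM are distilled into a small adaptation module, and features sharing a class are pulled toward their class centroid. The module shapes and exact loss terms below are assumptions, not the paper's precise formulation.

```python
# Hedged sketch of class-aware feature self-distillation (assumed losses).
import torch
import torch.nn.functional as F

def cfd_loss(prlm_feats, adapter, labels):
    adapted = adapter(prlm_feats)
    # Self-distillation: the adapter reproduces the (detached) PrLM features.
    distill = F.mse_loss(adapted, prlm_feats.detach())
    # Class-aware clustering: pull same-class features toward their centroid.
    cluster = 0.0
    for c in labels.unique():
        members = adapted[labels == c]
        cluster = cluster + ((members - members.mean(0)) ** 2).sum(-1).mean()
    return distill + cluster / len(labels.unique())
```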

Solution Path Algorithm for Twin Multi-class Support Vector Machine

1 code implementation 30 May 2020 Liuyuan Chen, Kanglei Zhou, Junchang Jing, Haiju Fan, Juntao Li

Next, the Lagrangian multipliers are proven to equal 1 as the regularization parameter approaches infinity; thus, a simple yet effective initialization algorithm is devised.

Binary Classification Classification +2

Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce

1 code implementation 17 May 2020 Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan

We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.

Attribute Cross-Lingual Information Retrieval +1

Stick to the Facts: Learning towards a Fidelity-oriented E-Commerce Product Description Generation

no code implementations IJCNLP 2019 Zhangming Chan, Xiuying Chen, Yongliang Wang, Juntao Li, Zhiqiang Zhang, Kun Gai, Dongyan Zhao, Rui Yan

Different from other text generation tasks, in product description generation, it is of vital importance to generate faithful descriptions that stick to the product attribute information.

Attribute Decoder +1

Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References

no code implementations ACL 2019 Lisong Qiu, Juntao Li, Wei Bi, Dongyan Zhao, Rui Yan

Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but sometimes suffers from generic responses.

Dialogue Generation valid
