Search Results for author: Dongliang Xu

Found 10 papers, 2 papers with code

MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability

1 code implementation • 23 May 2024 • Yanrui Du, Sendong Zhao, Danyang Zhao, Ming Ma, Yuhan Chen, Liangyu Huo, Qing Yang, Dongliang Xu, Bing Qin

When encountering malicious instructions, the router will assign a higher weight to the safe LLM to ensure that responses are harmless.
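The routing idea above can be sketched as a weighted blend of two responders' states, where the router's weight shifts toward the safe side for malicious inputs. This is only an illustrative sketch; the function and variable names (`route`, `hidden_safe`, `hidden_usable`, `w_safe`) are assumptions, not the authors' actual API.

```python
# Illustrative sketch of router-weighted blending between a "safe" and a
# "usable" responder. All names here are hypothetical, not from the paper.
import numpy as np

def route(hidden_safe: np.ndarray, hidden_usable: np.ndarray, w_safe: float) -> np.ndarray:
    """Blend the two responders' states by the router's safety weight."""
    assert 0.0 <= w_safe <= 1.0
    return w_safe * hidden_safe + (1.0 - w_safe) * hidden_usable

# For a malicious instruction the router would emit a high safe weight,
# so the blended state is dominated by the safe responder.
blended = route(np.ones(4), np.zeros(4), w_safe=0.9)
print(blended)  # each component is 0.9, i.e. mostly the safe responder
```

A benign instruction would instead get a low `w_safe`, preserving the usable responder's behavior.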

Meaningful Learning: Advancing Abstract Reasoning in Large Language Models via Generic Fact Guidance

no code implementations • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao

Large language models (LLMs) have demonstrated impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence.


Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models

no code implementations • 1 Mar 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che

Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).

Program Synthesis

MultiPoT: Multilingual Program of Thoughts Harnesses Multiple Programming Languages

no code implementations • 16 Feb 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che

In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models.
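Since no single language wins everywhere, one natural way to harness multiple languages is to run the program of thoughts in each and integrate the answers, e.g. by majority vote. The sketch below illustrates that integration step only; the function name and input shape are assumptions, not the paper's implementation.

```python
# Hypothetical integration step: given per-language execution results,
# pick the most common answer. Names and structure are illustrative.
from collections import Counter

def multipot_vote(answers_by_language: dict) -> object:
    """Return the majority answer across per-language program results,
    ignoring languages whose program failed to produce an answer."""
    counts = Counter(a for a in answers_by_language.values() if a is not None)
    answer, _ = counts.most_common(1)[0]
    return answer

results = {"python": 42, "r": 42, "javascript": 41, "java": None}
print(multipot_vote(results))  # 42
```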

SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models

no code implementations • 16 Jan 2024 • Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che

Existing methods devise the learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) block and the selection module to pick out the corresponding one for the testing input, aiming at handling the challenges of catastrophic forgetting and knowledge transfer in CL.

Continual Learning • Transfer Learning
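The selection module described above has to map a test input to one of the task-specific PET blocks. A minimal sketch of such a selector, assuming each block is summarized by a learned key vector and matching is done by cosine similarity (an assumption for illustration, not the paper's shared-attention mechanism):

```python
# Hypothetical PET-block selection by cosine similarity between the test
# input's representation and per-block key vectors. Purely illustrative.
import numpy as np

def select_pet_block(query: np.ndarray, block_keys: np.ndarray) -> int:
    """Return the index of the PET block whose key best matches the input."""
    sims = block_keys @ query / (
        np.linalg.norm(block_keys, axis=1) * np.linalg.norm(query) + 1e-9
    )
    return int(np.argmax(sims))

keys = np.array([[0.0, 1.0], [1.0, 0.0]])  # one key per learned task
print(select_pet_block(np.array([1.0, 0.1]), keys))  # picks block 1
```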

Length Extrapolation of Transformers: A Survey from the Perspective of Positional Encoding

no code implementations • 28 Dec 2023 • Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu

In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.


SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models

no code implementations • 24 May 2023 • Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin

Despite achieving remarkable performance on various vision-language tasks, Transformer-based Vision-Language Models (VLMs) suffer from redundancy in inputs and parameters, significantly hampering their efficiency in real-world applications.

Data Augmentation
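The input-redundancy problem above is commonly attacked by pruning low-importance tokens. As a rough illustration of that idea (not SmartTrim's adaptive mechanism; the scoring and ratio here are assumptions), one can keep only the top-scoring fraction of tokens while preserving their order:

```python
# Illustrative token pruning: keep the top-k tokens by an importance score,
# preserving original order. Scores and keep ratio are hypothetical inputs.
import numpy as np

def prune_tokens(tokens: np.ndarray, scores: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Drop the lowest-scoring tokens, keeping ceil-free top fraction in order."""
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.argsort(scores)[-k:]          # indices of the k highest scores
    return tokens[np.sort(keep)]            # restore original token order

tokens = np.array([10, 20, 30, 40])
scores = np.array([0.1, 0.9, 0.2, 0.8])
print(prune_tokens(tokens, scores, keep_ratio=0.5))  # [20 40]
```

Adaptive methods differ in that the keep ratio and scores are predicted per input rather than fixed.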

XuanYuan 2.0: A Large Chinese Financial Chat Model with Hundreds of Billions Parameters

1 code implementation • 19 May 2023 • Xuanyu Zhang, Qing Yang, Dongliang Xu

In recent years, pre-trained language models have undergone rapid development with the emergence of large-scale models.

Verification Code Recognition Based on Active and Deep Learning

no code implementations • 12 Feb 2019 • Dongliang Xu, Bailing Wang, Xiaojiang Du, Xiaoyan Zhu, Zhitao Guan, Xiaoyan Yu, Jingyu Liu

However, the advantages of convolutional neural networks depend on the data used by the training classifier, particularly the size of the training set.
