no code implementations • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao
Large language models (LLMs) have demonstrated impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards human-like intelligence.
no code implementations • 1 Mar 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).
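For concreteness, here is a minimal sketch of the self-instruct loop under assumed details (not this paper's pipeline): seed tasks are sampled back into a prompt so a model bootstraps new instruction-response pairs. The `llm` function is a hypothetical placeholder for any text-generation call, and the prompt format is illustrative; natural-instruct data would instead come from human writers.

```python
import json
import random

def llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model call.
    return "Instruction: Reverse a string in Python.\nResponse: s[::-1]"

def self_instruct(seed_tasks: list[str], rounds: int = 3) -> list[dict]:
    pool = list(seed_tasks)
    generated = []
    for _ in range(rounds):
        examples = "\n".join(random.sample(pool, k=min(2, len(pool))))
        raw = llm(f"Here are example tasks:\n{examples}\nWrite a new task and its answer.")
        instruction, _, response = raw.partition("\nResponse: ")
        generated.append({"instruction": instruction.removeprefix("Instruction: "),
                          "response": response})
        pool.append(instruction)  # new tasks seed later rounds
    return generated

print(json.dumps(self_instruct(["Sort a list.", "Parse a date."]), indent=2))
```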
no code implementations • 16 Feb 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models.
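To make the PoT (program-of-thoughts) setting concrete, the hedged sketch below has a model emit a short Python program whose execution yields the answer; `generate_program` is a hypothetical stand-in for an LLM call, and the hard-coded program is illustrative only.

```python
def generate_program(question: str) -> str:
    # Hypothetical placeholder: a real system would prompt an LLM to write code here.
    return "answer = (48 - 20) / 2"

def solve_with_pot(question: str) -> float:
    program = generate_program(question)
    namespace = {}
    exec(program, {}, namespace)  # the executed program, not the model, computes the answer
    return namespace["answer"]

print(solve_with_pot("Tom has 48 apples, gives 20 away, and splits the rest in two. How many per half?"))
# -> 14.0
```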
no code implementations • 16 Jan 2024 • Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
Existing methods devise a learning module to acquire task-specific knowledge with a parameter-efficient tuning (PET) block, and a selection module to pick out the corresponding block for a test input, aiming to handle the challenges of catastrophic forgetting and knowledge transfer in continual learning (CL).
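As a rough illustration of that learning/selection pattern (a sketch under assumed details, not this paper's actual architecture), the snippet below keeps one small adapter per task and routes a test input to the block whose stored task prototype is most similar:

```python
import torch
import torch.nn as nn

class PETBlock(nn.Module):
    """A tiny bottleneck adapter standing in for any PET block."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down, self.up = nn.Linear(dim, rank), nn.Linear(rank, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))  # residual adapter

class PETPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim
        self.blocks = nn.ModuleList()   # learning modules, one per task
        self.prototypes = []            # one mean feature vector per task

    def add_task(self, task_features: torch.Tensor):
        self.blocks.append(PETBlock(self.dim))
        self.prototypes.append(task_features.mean(dim=0))

    def forward(self, h: torch.Tensor):
        # Selection module: cosine similarity to the stored task prototypes.
        protos = torch.stack(self.prototypes)                        # (num_tasks, dim)
        sims = torch.cosine_similarity(h.mean(dim=0, keepdim=True), protos)
        return self.blocks[int(sims.argmax())](h)

pool = PETPool(dim=16)
pool.add_task(torch.randn(32, 16))  # features from task 1
pool.add_task(torch.randn(32, 16))  # features from task 2
out = pool(torch.randn(10, 16))     # routed through the closest task's block
```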
no code implementations • 28 Dec 2023 • Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu
In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of positional encoding (PE).
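As one concrete example of an extrapolation-friendly PE scheme (ALiBi, Press et al., 2022; shown here as background, not as this survey's contribution), position embeddings are replaced by a linear distance penalty on attention logits, so any sequence length is admissible:

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Return (num_heads, seq_len, seq_len) additive attention biases."""
    # Head-specific slopes form a geometric sequence, as in the ALiBi paper.
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    pos = torch.arange(seq_len)
    dist = (pos[:, None] - pos[None, :]).clamp(min=0)  # causal query-key distance
    return -slopes[:, None, None] * dist               # farther keys get larger penalties

# Usage: logits = q @ k.transpose(-2, -1) / dim**0.5 + alibi_bias(heads, seq_len)
bias = alibi_bias(num_heads=8, seq_len=2048)  # valid at any length, seen in training or not
```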
no code implementations • 24 May 2023 • Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin
Despite achieving remarkable performance on various vision-language tasks, Transformer-based Vision-Language Models (VLMs) suffer from redundancy in inputs and parameters, significantly hampering their efficiency in real-world applications.
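A common remedy for input redundancy (illustrated below as a generic sketch, not necessarily this paper's method) is to prune visual tokens ranked by the attention they receive from the [CLS] token:

```python
import torch

def prune_tokens(tokens: torch.Tensor, cls_attn: torch.Tensor, keep: int) -> torch.Tensor:
    """tokens: (n, d) patch tokens; cls_attn: (n,) attention from [CLS] to each token."""
    idx = cls_attn.topk(keep).indices.sort().values  # keep the top-k, in original order
    return tokens[idx]

tokens = torch.randn(196, 768)               # e.g. ViT patch tokens for one image
cls_attn = torch.rand(196)                   # attention weights from [CLS]
pruned = prune_tokens(tokens, cls_attn, 98)  # drop half the visual tokens
```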
1 code implementation • 19 May 2023 • Xuanyu Zhang, Qing Yang, Dongliang Xu
In recent years, pre-trained language models have undergone rapid development with the emergence of large-scale models.
no code implementations • 18 Apr 2022 • Xuanyu Zhang, Qing Yang, Dongliang Xu
Knowledge graph embedding (KGE) aims to learn continuous vector representations of the entities and relations in a knowledge graph; a minimal TransE-style sketch follows below.
Ranked #8 on Link Property Prediction on ogbl-wikikg2
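As background to the entry above, here is a minimal sketch of one classic KGE instantiation, TransE (Bordes et al., 2013), in which a true triple should satisfy head + relation ≈ tail; the model ranked above may differ:

```python
import torch

num_entities, num_relations, dim = 1000, 50, 128
entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

def transe_score(head: torch.Tensor, relation: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
    """Lower is better: a plausible triple should satisfy head + relation ≈ tail."""
    h, r, t = entity_emb(head), relation_emb(relation), entity_emb(tail)
    return (h + r - t).norm(p=1, dim=-1)

score = transe_score(torch.tensor([3]), torch.tensor([7]), torch.tensor([42]))
```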
no code implementations • 12 Feb 2019 • Dongliang Xu, Bailing Wang, Xiaojiang Du, Xiaoyan Zhu, Zhitao Guan, Xiaoyan Yu, Jingyu Liu
However, the advantages of convolutional neural networks depend on the data used to train the classifier, particularly the size of the training set.