Search Results for author: Qianglong Chen

Found 20 papers, 6 papers with code

Mixed Distillation Helps Smaller Language Model Better Reasoning

no code implementations • 17 Dec 2023 • Chenglin Li, Qianglong Chen, Liangyue Li, Caiyu Wang, Yicheng Li, Zulong Chen, Yin Zhang

While large language models (LLMs) have demonstrated exceptional performance in recent natural language processing (NLP) tasks, their deployment poses substantial challenges due to high computational and memory demands in real-world applications.

Knowledge Distillation Language Modelling

Apollo's Oracle: Retrieval-Augmented Reasoning in Multi-Agent Debates

no code implementations • 8 Dec 2023 • Haotian Wang, Xiyuan Du, Weijiang Yu, Qianglong Chen, Kun Zhu, Zheng Chu, Lian Yan, Yi Guan

Addressing the challenge of cognitive constraints, we introduce a novel framework, Multi-Agent Debate with Retrieval Augmentation (MADRA).

Retrieval

TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models

1 code implementation • 29 Nov 2023 • Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, Bing Qin

Understanding time is a pivotal aspect of human cognition, crucial in the broader framework of grasping the intricacies of the world.

Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications

no code implementations • 10 Nov 2023 • Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

In this paper, we present a review of the trends in integrating knowledge and large language models, including a taxonomy of methods, benchmarks, and applications.

Knowledge Editing Retrieval

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

1 code implementation • 9 Nov 2023 • Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.

Hallucination

Knowledge-enhanced Memory Model for Emotional Support Conversation

no code implementations • 11 Oct 2023 • Mengzhao Jia, Qianglong Chen, Liqiang Jing, Dawei Fu, Renyu Li

The prevalence of mental disorders has become a significant issue, leading to increased focus on Emotional Support Conversation as an effective supplement for mental health support.

Response Generation

A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future

1 code implementation • 27 Sep 2023 • Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, Ting Liu

Chain-of-thought reasoning, a cognitive process fundamental to human intelligence, has garnered significant attention in the realm of artificial intelligence and natural language processing.

Large Language Models Are Also Good Prototypical Commonsense Reasoners

no code implementations • 22 Sep 2023 • Chenin Li, Qianglong Chen, Yin Zhang, Yifei Zhang, Hongxiang Yao

Commonsense reasoning is a pivotal skill for large language models, yet it presents persistent challenges in specific tasks requiring this competence.

StrategyQA

WYWEB: A NLP Evaluation Benchmark For Classical Chinese

1 code implementation • 23 May 2023 • Bo Zhou, Qianglong Chen, Tianyu Wang, Xiaomi Zhong, Yin Zhang

To fully evaluate the overall performance of different NLP models in a given domain, many evaluation benchmarks have been proposed, such as GLUE, SuperGLUE and CLUE.

Machine Translation Natural Language Understanding +2

Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering

no code implementations • 14 May 2023 • Qianglong Chen, Guohai Xu, Ming Yan, Ji Zhang, Fei Huang, Luo Si, Yin Zhang

Existing knowledge-enhanced methods have achieved remarkable results in certain QA tasks by obtaining diverse knowledge from different knowledge bases.

Explanation Generation Question Answering

AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation Framework For Multilingual Language Inference

no code implementations • 13 May 2023 • Qianglong Chen, Feng Ji, Feng-Lin Li, Guohai Xu, Ming Yan, Ji Zhang, Yin Zhang

To support cost-effective language inference in multilingual settings, we propose AMTSS, an adaptive multi-teacher single-student distillation framework, which allows distilling knowledge from multiple teachers to a single student.

Knowledge Distillation
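
The multi-teacher-to-single-student setup described above can be pictured with a minimal distillation sketch. This is a hypothetical PyTorch illustration, not the AMTSS implementation; the `teacher_weights` argument stands in for the paper's adaptive per-teacher weighting, whose exact form is not given here.

```python
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, teacher_logits_list,
                               teacher_weights, temperature=2.0):
    """Weighted KL distillation from several teachers to one student.

    `teacher_weights` is a stand-in for an adaptive per-teacher weighting
    (e.g., one weight per language); the real AMTSS scheme may differ.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    loss = 0.0
    for weight, teacher_logits in zip(teacher_weights, teacher_logits_list):
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        loss = loss + weight * F.kl_div(log_p_student, p_teacher,
                                        reduction="batchmean")
    return loss * temperature ** 2  # standard temperature scaling
```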

Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning

1 code implementation • CVPR 2023 • Wenjin Wang, Yunqing Hu, Qianglong Chen, Yin Zhang

In this paper, we propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, choosing between parameter allocation and regularization based on the task's learning difficulty.
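
The per-task choice reduces to a simple decision rule; a toy sketch follows, in which the difficulty measure and threshold are hypothetical, not taken from the paper.

```python
def choose_strategy(task_difficulty, threshold=0.5):
    """Toy per-task decision in the spirit of PAR: regularize reuse of
    existing parameters for easy (related) tasks, allocate new parameters
    for hard (novel) ones. The difficulty score and threshold are
    illustrative placeholders."""
    if task_difficulty < threshold:
        return "regularization"        # constrain updates to shared weights
    return "parameter_allocation"      # grow a task-specific branch
```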

ERNIE-mmLayout: Multi-grained MultiModal Transformer for Document Understanding

no code implementations • 18 Sep 2022 • Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, Yin Zhang

First, a document graph is proposed to model complex relationships among multi-grained multimodal elements, in which salient visual regions are detected by a cluster-based method.

Common Sense Reasoning Document Understanding +1

DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning

no code implementations • 1 Aug 2022 • Qianglong Chen, Feng-Lin Li, Guohai Xu, Ming Yan, Ji Zhang, Yin Zhang

We evaluate our approach on a variety of knowledge driven and language understanding tasks, including NER, relation extraction, CommonsenseQA, OpenBookQA and GLUE.

Contrastive Learning Language Modelling +2

Semantic Sentence Composition Reasoning for Multi-Hop Question Answering

no code implementations • 1 Mar 2022 • Qianglong Chen

Due to the lack of sufficient data, existing multi-hop open-domain question answering systems need to effectively identify relevant supporting facts for each question.

Multi-hop Question Answering Open-Domain Question Answering +3

K-AID: Enhancing Pre-trained Language Models with Domain Knowledge for Question Answering

no code implementations • 22 Sep 2021 • Fu Sun, Feng-Lin Li, Ruize Wang, Qianglong Chen, Xingyi Cheng, Ji Zhang

Knowledge-enhanced pre-trained language models (K-PLMs) are shown to be effective for many public tasks in the literature, but few of them have been successfully applied in practice.

Knowledge Distillation Question Answering +4

KACE: Generating Knowledge Aware Contrastive Explanations for Natural Language Inference

no code implementations • ACL 2021 • Qianglong Chen, Feng Ji, Xiangji Zeng, Feng-Lin Li, Ji Zhang, Haiqing Chen, Yin Zhang

In order to better understand the reason behind model behaviors (i.e., making predictions), most recent works have exploited generative models to provide complementary explanations.

Counterfactual Language Modelling +1

Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources

no code implementations • COLING 2020 • Qianglong Chen, Feng Ji, Haiqing Chen, Yin Zhang

More concretely, we first introduce a novel graph-based iterative knowledge retrieval module, which iteratively retrieves concepts and entities related to the given question and its choices from multiple knowledge sources.

Language Modelling Natural Language Understanding +2
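
The iterative expansion over multiple knowledge sources described above can be sketched as a breadth-first loop. This is a toy illustration with a made-up `KnowledgeSource` interface, not the paper's module, which additionally scores and prunes candidate concepts.

```python
class KnowledgeSource:
    """Minimal stand-in for a knowledge base (e.g., ConceptNet):
    maps a concept to the set of concepts related to it."""

    def __init__(self, edges):
        self.edges = edges  # dict: concept -> set of related concepts

    def neighbors(self, concept):
        return self.edges.get(concept, set())

def iterative_retrieve(question_concepts, knowledge_sources, max_hops=2):
    """Starting from concepts in the question and its answer choices,
    repeatedly pull in related concepts from every source, expanding
    only concepts not seen on earlier hops."""
    retrieved = set(question_concepts)
    frontier = set(question_concepts)
    for _ in range(max_hops):
        next_frontier = set()
        for concept in frontier:
            for source in knowledge_sources:
                next_frontier |= source.neighbors(concept)
        frontier = next_frontier - retrieved  # keep only new concepts
        retrieved |= frontier
    return retrieved
```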
