Search Results for author: Qianglong Chen

Found 26 papers, 8 papers with code

VCBench: A Controllable Benchmark for Symbolic and Abstract Challenges in Video Cognition

no code implementations · 14 Nov 2024 · Chenglin Li, Qianglong Chen, Zhi Li, Feng Tao, Yin Zhang

Recent advancements in Large Video-Language Models (LVLMs) have driven the development of benchmarks designed to assess cognitive abilities in video-based tasks.

Optimizing Instruction Synthesis: Effective Exploration of Evolutionary Space with Tree Search

no code implementations · 14 Oct 2024 · Chenglin Li, Qianglong Chen, Zhi Li, Feng Tao, Yicheng Li, Hao Chen, Fei Yu, Yin Zhang

With tree search and evaluation models, it can efficiently guide each instruction to evolve into a high-quality form, aiding in instruction fine-tuning.

Instruction Following
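The idea of steering instruction evolution with tree search can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: `mutate` and `score` are hypothetical stand-ins for the rewriting operators and the evaluation model.

```python
import heapq

def evolve_instruction(seed, mutate, score, beam=3, depth=2):
    """Best-first search over instruction variants.

    mutate(instr) -> list of candidate rewrites of an instruction
    score(instr)  -> quality estimate, higher is better
                     (stand-in for an evaluation model)
    """
    frontier = [(-score(seed), seed)]  # max-heap via negated scores
    best = (score(seed), seed)
    for _ in range(depth):
        next_frontier = []
        for _, instr in frontier:
            for cand in mutate(instr):
                s = score(cand)
                if s > best[0]:
                    best = (s, cand)
                heapq.heappush(next_frontier, (-s, cand))
        # Keep only the top-`beam` candidates for the next round.
        frontier = heapq.nsmallest(beam, next_frontier)
    return best[1]

# Toy demo: "quality" is just length; mutation appends detail.
mutate = lambda s: [s + " step-by-step", s + " with examples"]
print(evolve_instruction("Summarize the text", mutate, len))
```

With a real evaluation model in place of `len`, the same search loop prunes unpromising rewrites while letting strong ones evolve further.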

Role-RL: Online Long-Context Processing with Role Reinforcement Learning for Distinct LLMs in Their Optimal Roles

no code implementations · 26 Sep 2024 · Lewei He, Tianyu Shi, Pengran Huang, Bingzhi Chen, Qianglong Chen, JiaHui Pan

Long-context processing with large language models (LLMs) remains challenging due to implementation complexity, training inefficiency, and data sparsity.

BeamAggR: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering

no code implementations · 28 Jun 2024 · Zheng Chu, Jingchang Chen, Qianglong Chen, Haotian Wang, Kun Zhu, Xiyuan Du, Weijiang Yu, Ming Liu, Bing Qin

For composite questions, the LLM combines beam candidates, explores multiple reasoning paths through probabilistic aggregation, and prioritizes the most promising trajectory.

Multi-hop Question Answering · Question Answering +1
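The probabilistic aggregation step described above can be sketched as follows. This is a simplified illustration under assumed inputs, not the paper's method: each sub-question contributes a beam of scored candidates, and paths that reach the same final answer pool their probability mass.

```python
from collections import defaultdict
from itertools import product

def aggregate_beams(sub_beams, combine, top_k=2):
    """Aggregate candidate answers across sub-questions.

    sub_beams: list of {candidate: probability} dicts, one per sub-question.
    combine:   merges one candidate per sub-question into a final answer.
    """
    scores = defaultdict(float)
    for path in product(*(beam.items() for beam in sub_beams)):
        answers, probs = zip(*path)
        p = 1.0
        for q in probs:
            p *= q  # joint probability of this reasoning path
        scores[combine(answers)] += p  # marginalize over paths
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy demo: two sub-questions whose answers are summed.
beams = [{1: 0.6, 2: 0.4}, {3: 0.7, 4: 0.3}]
print(aggregate_beams(beams, sum))
```

Summing over paths is what lets a moderately likely answer reached many ways outrank a single high-probability but fragile trajectory.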

An Information Bottleneck Perspective for Effective Noise Filtering on Retrieval-Augmented Generation

1 code implementation · 3 Jun 2024 · Kun Zhu, Xiaocheng Feng, Xiyuan Du, Yuxuan Gu, Weijiang Yu, Haotian Wang, Qianglong Chen, Zheng Chu, Jingchang Chen, Bing Qin

Retrieval-augmented generation integrates the capabilities of large language models with relevant information retrieved from an extensive corpus, yet encounters challenges when confronted with real-world noisy data.

Answer Generation · Question Answering +1
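In an information-bottleneck spirit, noise filtering can be framed as keeping retrieved passages that carry signal for the query while suppressing redundant ones. The sketch below is a hypothetical greedy illustration, not the paper's objective; `relevance` and `redundancy` are assumed scoring functions.

```python
def filter_retrieval(passages, relevance, redundancy, beta=0.5):
    """Greedily keep passages whose relevance to the query outweighs
    their overlap with what has already been kept."""
    kept = []
    for p in sorted(passages, key=relevance, reverse=True):
        if relevance(p) - beta * redundancy(p, kept) > 0:
            kept.append(p)
    return kept

# Toy demo: passages as word sets, relevance = overlap with the query.
query = {"capital", "france"}
passages = [
    {"paris", "capital", "france"},
    {"capital", "france", "city"},   # redundant with the first
    {"eiffel", "tower"},             # irrelevant noise
]
rel = lambda p: len(p & query)
red = lambda p, kept: max((len(p & k) for k in kept), default=0)
print(filter_retrieval(passages, rel, red, beta=1.0))
```

The trade-off parameter `beta` plays the role of the bottleneck weight: larger values compress harder, keeping fewer, less redundant passages.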

Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation

no code implementations · 30 May 2024 · Jingchang Chen, Hongxuan Tang, Zheng Chu, Qianglong Chen, Zekun Wang, Ming Liu, Bing Qin

To this end, we propose FunCoder, a code generation framework incorporating the divide-and-conquer strategy with functional consensus.

Code Generation · HumanEval +1
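The functional-consensus idea can be illustrated with a toy selector: sample several candidate implementations of a sub-function, run them on probe inputs, and keep the one whose behaviour agrees most with the rest. This is a minimal sketch of the general technique, not FunCoder's actual procedure.

```python
def functional_consensus(candidates, inputs):
    """Pick the candidate whose outputs agree most with the others.

    candidates: callables, e.g. independently sampled implementations.
    inputs:     probe inputs used to compare behaviour.
    """
    signatures = []
    for fn in candidates:
        sig = []
        for x in inputs:
            try:
                sig.append(fn(x))
            except Exception:
                sig.append(None)  # crashing candidates agree with nothing
        signatures.append(tuple(sig))
    # Score each candidate by how many signatures (incl. its own) it matches.
    agreement = [sum(s == t for t in signatures) for s in signatures]
    return candidates[max(range(len(candidates)), key=agreement.__getitem__)]

# Toy demo: two correct squaring functions outvote a buggy one.
cands = [lambda x: x * x, lambda x: x ** 2, lambda x: x + x]
best = functional_consensus(cands, [2, 3, 4])
print(best(5))  # majority behaviour: 25
```

Because consensus is computed on input/output behaviour rather than source text, syntactically different but equivalent implementations still vote together.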

Mixed Distillation Helps Smaller Language Model Better Reasoning

no code implementations · 17 Dec 2023 · Chenglin Li, Qianglong Chen, Liangyue Li, Caiyu Wang, Yicheng Li, Zulong Chen, Yin Zhang

While large language models (LLMs) have demonstrated exceptional performance in recent natural language processing (NLP) tasks, their deployment poses substantial challenges due to high computational and memory demands in real-world applications.

Knowledge Distillation · Language Modeling +2

Learning to Break: Knowledge-Enhanced Reasoning in Multi-Agent Debate System

2 code implementations · 8 Dec 2023 · Haotian Wang, Xiyuan Du, Weijiang Yu, Qianglong Chen, Kun Zhu, Zheng Chu, Lian Yan, Yi Guan

First, we involve a shared retrieval knowledge pool in the debate process to solve the problem of limited and different knowledge backgrounds.

Retrieval

TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models

1 code implementation · 29 Nov 2023 · Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, Bing Qin

Grasping the concept of time is a fundamental facet of human cognition, indispensable for truly comprehending the intricacies of the world.

Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications

no code implementations · 10 Nov 2023 · Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

In this paper, we present a review of trends in the integration of knowledge and large language models, covering a taxonomy of methods, benchmarks, and applications.

Knowledge Editing · Retrieval +1

Knowledge-enhanced Memory Model for Emotional Support Conversation

no code implementations · 11 Oct 2023 · Mengzhao Jia, Qianglong Chen, Liqiang Jing, Dawei Fu, Renyu Li

The prevalence of mental disorders has become a significant issue, leading to the increased focus on Emotional Support Conversation as an effective supplement for mental health support.

Response Generation

Large Language Models Are Also Good Prototypical Commonsense Reasoners

no code implementations · 22 Sep 2023 · Chenglin Li, Qianglong Chen, Yin Zhang, Yifei Zhang, Hongxiang Yao

Commonsense reasoning is a pivotal skill for large language models, yet it presents persistent challenges in specific tasks requiring this competence.

StrategyQA

WYWEB: A NLP Evaluation Benchmark For Classical Chinese

1 code implementation · 23 May 2023 · Bo Zhou, Qianglong Chen, Tianyu Wang, Xiaomi Zhong, Yin Zhang

To fully evaluate the overall performance of different NLP models in a given domain, many evaluation benchmarks are proposed, such as GLUE, SuperGLUE and CLUE.

Machine Translation · Natural Language Understanding +2

Distinguish Before Answer: Generating Contrastive Explanation as Knowledge for Commonsense Question Answering

no code implementations · 14 May 2023 · Qianglong Chen, Guohai Xu, Ming Yan, Ji Zhang, Fei Huang, Luo Si, Yin Zhang

Existing knowledge-enhanced methods have achieved remarkable results in certain QA tasks via obtaining diverse knowledge from different knowledge bases.

Explanation Generation · Question Answering

AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation Framework For Multilingual Language Inference

no code implementations · 13 May 2023 · Qianglong Chen, Feng Ji, Feng-Lin Li, Guohai Xu, Ming Yan, Ji Zhang, Yin Zhang

To support cost-effective language inference in multilingual settings, we propose AMTSS, an adaptive multi-teacher single-student distillation framework, which allows distilling knowledge from multiple teachers to a single student.

Knowledge Distillation
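The core of multi-teacher-to-one-student distillation can be sketched numerically: the student is trained toward a weighted mixture of the teachers' softened output distributions. This is a generic illustration of the technique, not AMTSS's adaptive weighting; the logits and weights below are made-up inputs.

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-softened distribution over class logits."""
    exps = [math.exp(l / temp) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def multi_teacher_target(teacher_logits, weights, temp=2.0):
    """Soft target for the student: weighted mixture of teacher distributions.

    In an adaptive setup the weights could reflect per-teacher reliability
    on the current language or domain; here they are fixed inputs.
    """
    dists = [softmax(l, temp) for l in teacher_logits]
    k = len(dists[0])
    return [sum(w * d[i] for w, d in zip(weights, dists)) for i in range(k)]

def kl_div(p, q):
    """KL(p || q): the distillation loss between target p and student q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy demo: two teachers weighted 0.7 / 0.3 against one student.
target = multi_teacher_target([[2.0, 0.5, 0.1], [1.0, 1.0, 0.2]], [0.7, 0.3])
student = softmax([1.8, 0.6, 0.2], temp=2.0)
print(round(kl_div(target, student), 4))
```

Minimizing this KL term pulls the single student toward the blended behaviour of all teachers at once, which is what makes the setup cheaper to serve than the teachers themselves.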

Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning

1 code implementation · CVPR 2023 · Wenjin Wang, Yunqing Hu, Qianglong Chen, Yin Zhang

In this paper, we propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, either parameter allocation or regularization, based on its learning difficulty.

ERNIE-mmLayout: Multi-grained MultiModal Transformer for Document Understanding

no code implementations · 18 Sep 2022 · Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng, Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, Yin Zhang

At first, a document graph is proposed to model complex relationships among multi-grained multimodal elements, in which salient visual regions are detected by a cluster-based method.

Common Sense Reasoning · Document Understanding +1

DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning

no code implementations · 1 Aug 2022 · Qianglong Chen, Feng-Lin Li, Guohai Xu, Ming Yan, Ji Zhang, Yin Zhang

We evaluate our approach on a variety of knowledge driven and language understanding tasks, including NER, relation extraction, CommonsenseQA, OpenBookQA and GLUE.

Contrastive Learning · Language Modeling +3

Semantic Sentence Composition Reasoning for Multi-Hop Question Answering

no code implementations · 1 Mar 2022 · Qianglong Chen

Due to insufficient data, existing multi-hop open-domain question answering systems need to effectively find the relevant supporting facts for each question.

Multi-hop Question Answering · Open-Domain Question Answering +3

K-AID: Enhancing Pre-trained Language Models with Domain Knowledge for Question Answering

no code implementations · 22 Sep 2021 · Fu Sun, Feng-Lin Li, Ruize Wang, Qianglong Chen, Xingyi Cheng, Ji Zhang

Knowledge enhanced pre-trained language models (K-PLMs) are shown to be effective for many public tasks in the literature but few of them have been successfully applied in practice.

Knowledge Distillation · Question Answering +4

KACE: Generating Knowledge Aware Contrastive Explanations for Natural Language Inference

no code implementations · ACL 2021 · Qianglong Chen, Feng Ji, Xiangji Zeng, Feng-Lin Li, Ji Zhang, Haiqing Chen, Yin Zhang

In order to better understand the reasons behind model behaviors (i.e., making predictions), most recent works have exploited generative models to provide complementary explanations.

Counterfactual · Language Modelling +1

Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources

no code implementations · COLING 2020 · Qianglong Chen, Feng Ji, Haiqing Chen, Yin Zhang

More concretely, we first introduce a novel graph-based iterative knowledge retrieval module, which iteratively retrieves concepts and entities related to the given question and its choices from multiple knowledge sources.

Language Modeling · Language Modelling +3
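The iterative retrieval loop described above can be sketched as a bounded expansion over a concept graph. This is a minimal stand-in, not the paper's module: the adjacency dict plays the role of the merged knowledge sources, and `seeds` are the concepts extracted from the question and its choices.

```python
from collections import deque

def iterative_retrieve(graph, seeds, max_hops=2):
    """Iteratively expand from question/choice concepts over a knowledge graph.

    graph: adjacency dict {concept: [related concepts]}.
    seeds: concepts mentioned in the question and its answer choices.
    Returns every concept reachable within `max_hops` expansion rounds.
    """
    retrieved = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # stop expanding beyond the hop budget
        for neighbor in graph.get(node, []):
            if neighbor not in retrieved:
                retrieved.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return retrieved

# Toy demo: concepts reachable from "bird" within two hops.
kg = {"bird": ["wing", "fly"], "fly": ["air"], "air": ["oxygen"]}
print(sorted(iterative_retrieve(kg, ["bird"], max_hops=2)))
```

The hop budget is what keeps retrieval focused: concepts like "oxygen" that are only loosely connected to the question stay outside the retrieved set.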
