Search Results for author: Wanjun Zhong

Found 41 papers, 25 papers with code

Analytical Reasoning of Text

1 code implementation · Findings (NAACL) 2022 · Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Yining Chen, Jiahai Wang, Jian Yin, Ming Zhou, Nan Duan

In this paper, we study the challenge of analytical reasoning of text and collect a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.

Concise and Precise Context Compression for Tool-Using Language Models

no code implementations · 2 Jul 2024 · Yang Xu, Yunlong Feng, Honglin Mu, Yutai Hou, Yitong Li, Xinghao Wang, Wanjun Zhong, Zhongyang Li, Dandan Tu, Qingfu Zhu, Min Zhang, Wanxiang Che

However, when compressing tool documentation, existing methods suffer from the weaknesses of key information loss (specifically, tool/parameter name errors) and difficulty in adjusting the length of compressed sequences based on documentation lengths.

CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models

1 code implementation · 6 Mar 2024 · Zexuan Qiu, Jingjing Li, Shijue Huang, Wanjun Zhong, Irwin King

Developing Large Language Models (LLMs) with robust long-context capabilities has been the recent research focus, resulting in the emergence of long-context LLMs proficient in Chinese.

Learning to Edit: Aligning LLMs with Knowledge Editing

1 code implementation · 19 Feb 2024 · Yuxin Jiang, YuFei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang

Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention.

knowledge editing · Philosophy

Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios

1 code implementation · 30 Jan 2024 · Shijue Huang, Wanjun Zhong, Jianqiao Lu, Qi Zhu, Jiahui Gao, Weiwen Liu, Yutai Hou, Xingshan Zeng, Yasheng Wang, Lifeng Shang, Xin Jiang, Ruifeng Xu, Qun Liu

The recent trend of using Large Language Models (LLMs) as tool agents in real-world applications underscores the necessity for comprehensive evaluations of their capabilities, particularly in complex scenarios involving planning, creating, and using tools.


YODA: Teacher-Student Progressive Learning for Language Models

no code implementations · 28 Jan 2024 · Jianqiao Lu, Wanjun Zhong, YuFei Wang, Zhijiang Guo, Qi Zhu, Wenyong Huang, Yanlin Wang, Fei Mi, Baojun Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu

With the teacher's guidance, the student learns to iteratively refine its answer with feedback, and forms a robust and comprehensive understanding of the posed questions.
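The teacher-guided refinement described above can be pictured as a simple loop; the sketch below is a hypothetical illustration of such a loop, not the paper's actual YODA algorithm, and the `student`/`teacher` callables stand in for model calls that the paper does not specify at this level of detail.

```python
from typing import Callable, Tuple

def refine_loop(question: str,
                student: Callable[[str], str],
                teacher: Callable[[str, str], Tuple[bool, str]],
                max_rounds: int = 3) -> str:
    """Toy teacher-student refinement loop.

    The student proposes an answer; the teacher either accepts it or
    returns feedback, which is appended to the prompt so the student
    can iteratively refine its answer.
    """
    prompt = question
    answer = student(prompt)
    for _ in range(max_rounds):
        ok, feedback = teacher(question, answer)
        if ok:
            break
        prompt = f"{prompt}\nFeedback: {feedback}"
        answer = student(prompt)
    return answer
```

In practice both roles would be LLM calls; here they are plain functions so the control flow is easy to see.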

GSM8K · Math

G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model

1 code implementation · 18 Dec 2023 · Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, YuFei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, Lingpeng Kong

We first analyze the limitations of current Multimodal Large Language Models (MLLMs) in this area: they struggle to accurately comprehend basic geometric elements and their relationships.

Language Modelling · Large Language Model

FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models

1 code implementation · 31 Oct 2023 · Yuxin Jiang, YuFei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, Wei Wang

To fill this research gap, in this paper, we propose FollowBench, a Multi-level Fine-grained Constraints Following Benchmark for LLMs.

Instruction Following

Adaptive-Solver Framework for Dynamic Strategy Selection in Large Language Model Reasoning

no code implementations · 1 Oct 2023 · Jianpeng Zhou, Wanjun Zhong, Yanlin Wang, Jiahai Wang

Experimental results from complex reasoning tasks reveal that the prompting method adaptation and decomposition granularity adaptation enhance performance across all tasks.

Computational Efficiency · Language Modelling +2

SELF: Self-Evolution with Language Feedback

no code implementations · 1 Oct 2023 · Jianqiao Lu, Wanjun Zhong, Wenyong Huang, YuFei Wang, Qi Zhu, Fei Mi, Baojun Wang, Weichao Wang, Xingshan Zeng, Lifeng Shang, Xin Jiang, Qun Liu

SELF initiates with a meta-skill learning process that equips the LLMs with capabilities for self-feedback and self-refinement.

Language Modelling · Large Language Model

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

1 code implementation · 19 Sep 2023 · Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu

When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.

Explanation Generation · Language Modelling +2

Aligning Large Language Models with Human: A Survey

1 code implementation · 24 Jul 2023 · YuFei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu

(2) Training methodologies: a detailed review of the prevailing training methods employed for LLM alignment.

GroundNLQ @ Ego4D Natural Language Queries Challenge 2023

1 code implementation · 27 Jun 2023 · Zhijian Hou, Lei Ji, Difei Gao, Wanjun Zhong, Kun Yan, Chao Li, Wing-Kwong Chan, Chong-Wah Ngo, Nan Duan, Mike Zheng Shou

Motivated by this, we leverage a two-stage pre-training strategy to train egocentric feature extractors and the grounding model on video narrations, and further fine-tune the model on annotated data.

Natural Language Queries

MemoryBank: Enhancing Large Language Models with Long-Term Memory

1 code implementation · 17 May 2023 · Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang

To mimic anthropomorphic behaviors and selectively preserve memory, MemoryBank incorporates a memory updating mechanism inspired by the Ebbinghaus Forgetting Curve, which allows the AI to forget or reinforce a memory based on the time elapsed and the memory's relative significance, thereby offering a human-like memory mechanism.
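A forgetting-curve mechanism of the kind described can be sketched with the classic Ebbinghaus retention formula R = exp(-t/S), where t is elapsed time and S is memory strength. This is a toy illustration under that assumption, not MemoryBank's actual implementation; the class names, the reinforcement factor, and the forgetting threshold are all hypothetical.

```python
import math
import time
from typing import Optional

def retention(elapsed_seconds: float, strength: float) -> float:
    """Ebbinghaus-style retention: R = exp(-t / S)."""
    return math.exp(-elapsed_seconds / strength)

class MemoryItem:
    def __init__(self, text: str, strength: float = 86400.0):
        self.text = text
        self.strength = strength        # decay time constant, in seconds
        self.last_access = time.time()

    def recall(self, now: Optional[float] = None) -> str:
        """Reinforce the memory: reset the clock and boost its strength,
        so frequently recalled memories decay more slowly."""
        self.last_access = time.time() if now is None else now
        self.strength *= 2.0            # hypothetical reinforcement factor
        return self.text

    def should_forget(self, now: Optional[float] = None,
                      threshold: float = 0.1) -> bool:
        """Forget once estimated retention drops below the threshold."""
        now = time.time() if now is None else now
        return retention(now - self.last_access, self.strength) < threshold
```

A memory store built on this would periodically drop items whose `should_forget` returns true and call `recall` whenever an item is retrieved for a conversation.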


AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models

2 code implementations · 13 Apr 2023 · Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, Nan Duan

Impressively, GPT-4 surpasses average human performance on the SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and 92.5% accuracy on the English test of the Chinese national college entrance exam.

Decision Making · Math

Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers

no code implementations · 20 Oct 2022 · Wanjun Zhong, Tingting Ma, Jiahai Wang, Jian Yin, Tiejun Zhao, Chin-Yew Lin, Nan Duan

This paper presents ReasonFormer, a unified reasoning framework for mirroring the modular and compositional reasoning process of humans in complex decision-making.

Decision Making

Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in OpenQA

1 code implementation · 11 Oct 2022 · JunJie Huang, Wanjun Zhong, Qian Liu, Ming Gong, Daxin Jiang, Nan Duan

However, training an effective dense table-text retriever is difficult due to the challenges of table-text discrepancy and data sparsity.

Open-Domain Question Answering · Representation Learning +1

CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding

1 code implementation · 22 Sep 2022 · Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, Nan Duan

This paper tackles an emerging and challenging problem of long video temporal grounding (VTG) that localizes video moments related to a natural language (NL) query.

Contrastive Learning · Video Grounding

Improving Task Generalization via Unified Schema Prompt

no code implementations · 5 Aug 2022 · Wanjun Zhong, Yifan Gao, Ning Ding, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Task generalization has been a long-standing challenge in Natural Language Processing (NLP).

LogiGAN: Learning Logical Reasoning via Adversarial Pre-training

1 code implementation · 18 May 2022 · Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, Jian-Guang Lou

We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.

Logical Reasoning · Sentence

Modeling Semantic Composition with Syntactic Hypergraph for Video Question Answering

no code implementations · 13 May 2022 · Zenan Xu, Wanjun Zhong, Qinliang Su, Zijing Ou, Fuwei Zhang

A key challenge in video question answering is how to realize the cross-modal semantic alignment between textual concepts and corresponding visual objects.

Question Answering · Semantic Composition +1

ProQA: Structural Prompt-based Pre-training for Unified Question Answering

1 code implementation · NAACL 2022 · Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.

Continual Learning · Few-Shot Learning +2

Reasoning over Hybrid Chain for Table-and-Text Open Domain QA

no code implementations · 15 Jan 2022 · Wanjun Zhong, JunJie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering.

Open-Domain Question Answering

AR-LSAT: Investigating Analytical Reasoning of Text

1 code implementation · 14 Apr 2021 · Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin, Ming Zhou, Nan Duan

Analytical reasoning is an essential and challenging task that requires a system to analyze a scenario involving a set of particular circumstances and perform reasoning over it to make conclusions.

Syntax-Enhanced Pre-trained Model

1 code implementation · ACL 2021 · Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang

We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.

Entity Typing · Question Answering +1

Neural Deepfake Detection with Factual Structure of Text

1 code implementation · EMNLP 2020 · Wanjun Zhong, Duyu Tang, Zenan Xu, Ruize Wang, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin

To address this, we propose a graph-based model that utilizes the factual structure of a document for deepfake detection of text.

DeepFake Detection · Face Swapping +2

Reasoning Over Semantic-Level Graph for Fact Checking

no code implementations · ACL 2020 · Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin

We evaluate our system on FEVER, a benchmark dataset for fact checking, and find that rich structural information is helpful and both our graph-based mechanisms improve the accuracy.

Claim Verification · Fact Checking +5

Improving Question Answering by Commonsense-Based Pre-Training

no code implementations · 5 Sep 2018 · Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin

Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge.

Question Answering
