Search Results for author: Zhijiang Guo

Found 36 papers, 25 papers with code

The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task

no code implementations • EMNLP (FEVER) 2021 • Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, Arpit Mittal

The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are Supported or Refuted based on evidence retrieved from Wikipedia (or NotEnoughInfo if the claim cannot be verified).


MR-BEN: A Comprehensive Meta-Reasoning Benchmark for Large Language Models

no code implementations • 20 Jun 2024 • Zhongshen Zeng, Yinhong Liu, Yingjia Wan, Jingyao Li, Pengguang Chen, Jianbo Dai, Yuxuan Yao, Rongwu Xu, Zehan Qi, Wanru Zhao, Linling Shen, Jianqiao Lu, Haochen Tan, Yukang Chen, Hao Zhang, Zhan Shi, Bailin Wang, Zhijiang Guo, Jiaya Jia

Large language models (LLMs) have shown increasing capability in problem-solving and decision-making, largely based on step-by-step chain-of-thought reasoning processes.

Decision Making

Process-Driven Autoformalization in Lean 4

1 code implementation • 4 Jun 2024 • Jianqiao Lu, Zhengying Liu, Yingjia Wan, Yinya Huang, Haiming Wang, Zhicheng Yang, Jing Tang, Zhijiang Guo

Autoformalization, the conversion of natural language mathematics into formal languages, offers significant potential for advancing mathematical reasoning.

Mathematical Reasoning

CtrlA: Adaptive Retrieval-Augmented Generation via Probe-Guided Control

1 code implementation • 29 May 2024 • Huanshuo Liu, Hao Zhang, Zhijiang Guo, Kuicai Dong, Xiangyang Li, Yi Quan Lee, Cong Zhang, Yong liu

Specifically, CtrlA employs an honesty probe to regulate the LLM's behavior by manipulating its representations for increased honesty, and a confidence probe to monitor the LLM's internal states and assess confidence levels, determining whether retrieval is necessary during generation.

RAG Response Generation +1
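
The probe-guided control described in the CtrlA abstract can be illustrated with a minimal, hypothetical sketch: a linear probe scores the model's hidden state at each generation step, and retrieval is triggered only when that confidence score falls below a threshold. The probe direction, hidden-state dimension, and threshold below are illustrative stand-ins, not the paper's actual implementation (where probes are learned from model representations).

```python
import numpy as np

# Hypothetical sketch of probe-guided adaptive retrieval: a linear
# "confidence probe" scores the hidden state, and retrieval fires only
# when confidence is low. All values here are illustrative.

rng = np.random.default_rng(0)
HIDDEN_DIM = 16

probe_direction = rng.normal(size=HIDDEN_DIM)  # learned offline in practice
probe_direction /= np.linalg.norm(probe_direction)

def confidence_score(hidden_state: np.ndarray) -> float:
    """Project the hidden state onto the probe direction (higher = more confident)."""
    return float(hidden_state @ probe_direction)

def needs_retrieval(hidden_state: np.ndarray, threshold: float = 0.0) -> bool:
    """Trigger retrieval only when the probe's confidence falls below threshold."""
    return confidence_score(hidden_state) < threshold

confident_state = 2.0 * probe_direction    # aligned with the probe: high score
uncertain_state = -2.0 * probe_direction   # anti-aligned: low score

print(needs_retrieval(confident_state))  # False
print(needs_retrieval(uncertain_state))  # True
```

The design point is that the probe reads the model's internal state directly, so no extra generation passes are needed to decide when to retrieve.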

AutoCV: Empowering Reasoning with Automated Process Labeling via Confidence Variation

1 code implementation • 27 May 2024 • Jianqiao Lu, Zhiyang Dou, Hongru Wang, Zeyu Cao, Jianbo Dai, Yingjia Wan, Yinya Huang, Zhijiang Guo

We experimentally validate that the confidence variations learned by a verification model trained on final-answer correctness can effectively identify errors in the reasoning steps.
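
The confidence-variation idea in the AutoCV abstract can be sketched as follows: a verifier (stubbed here) assigns a correctness confidence to the partial solution after each reasoning step, and a sharp drop in confidence marks the likely faulty step. The scores and threshold below are made-up illustrations, not outputs of the paper's model.

```python
# Hypothetical sketch of process labeling via confidence variation:
# flag the step at which a verifier's confidence drops sharply.

step_confidences = [0.92, 0.90, 0.88, 0.35, 0.33]  # stub verifier outputs per step

def flag_faulty_steps(confidences, drop_threshold=0.3):
    """Return 1-based indices of steps that cause a sharp confidence drop."""
    flagged = []
    for i in range(1, len(confidences)):
        if confidences[i - 1] - confidences[i] > drop_threshold:
            flagged.append(i + 1)  # step i+1 (1-based) caused the drop
    return flagged

print(flag_faulty_steps(step_confidences))  # [4]
```

This is why only final-answer labels are needed for training: the per-step error signal is read off from how the learned confidence varies across steps.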

MHPP: Exploring the Capabilities and Limitations of Language Models Beyond Basic Code Generation

1 code implementation • 19 May 2024 • Jianbo Dai, Jianqiao Lu, Yunlong Feng, Rongju Ruan, Ming Cheng, Haochen Tan, Zhijiang Guo

Our study analyzed two common benchmarks, HumanEval and MBPP, and found that these might not thoroughly evaluate LLMs' code generation capacities due to limitations in quality, difficulty, and granularity.

Code Generation

HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning

1 code implementation • 30 Apr 2024 • Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, Chengzhong Xu

Through a series of experiments, we have uncovered two critical insights that shed light on the training and parameter inefficiency of LoRA.

Learning From Correctness Without Prompting Makes LLM Efficient Reasoner

1 code implementation • 28 Mar 2024 • Yuxuan Yao, Han Wu, Zhijiang Guo, Biyan Zhou, Jiahui Gao, Sichun Luo, Hanxu Hou, Xiaojin Fu, Linqi Song

Large language models (LLMs) have demonstrated outstanding performance across various tasks, yet they still exhibit limitations such as hallucination, unfaithful reasoning, and toxic content.


Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators

1 code implementation • 25 Mar 2024 • Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulić, Anna Korhonen, Nigel Collier

Large Language Models (LLMs) have demonstrated promising capabilities as automatic evaluators in assessing the quality of generated natural language.

Language Modelling Large Language Model

Knowledge Conflicts for LLMs: A Survey

1 code implementation • 13 Mar 2024 • Rongwu Xu, Zehan Qi, Zhijiang Guo, Cunxiang Wang, Hongru Wang, Yue Zhang, Wei Xu

This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge.


Evaluating Robustness of Generative Search Engine on Adversarial Factual Questions

no code implementations • 25 Feb 2024 • Xuming Hu, Xiaochuan Li, Junzhe Chen, Yinghui Li, Yangning Li, Xiaoguang Li, Yasheng Wang, Qun Liu, Lijie Wen, Philip S. Yu, Zhijiang Guo

To this end, we propose evaluating the robustness of generative search engines in a realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning incorrect responses.


YODA: Teacher-Student Progressive Learning for Language Models

no code implementations • 28 Jan 2024 • Jianqiao Lu, Wanjun Zhong, YuFei Wang, Zhijiang Guo, Qi Zhu, Wenyong Huang, Yanlin Wang, Fei Mi, Baojun Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu

With the teacher's guidance, the student learns to iteratively refine its answer with feedback, and forms a robust and comprehensive understanding of the posed questions.

GSM8K Math

Do We Need Language-Specific Fact-Checking Models? The Case of Chinese

no code implementations • 27 Jan 2024 • Caiqi Zhang, Zhijiang Guo, Andreas Vlachos

This paper investigates the potential benefits of language-specific fact-checking models, focusing on the case of Chinese.

Evidence Selection Fact Checking +4

Do Large Language Models Know about Facts?

no code implementations • 8 Oct 2023 • Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, Zhijiang Guo

Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks.

Question Answering Text Generation

DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning

1 code implementation • 4 Oct 2023 • Jing Xiong, Zixuan Li, Chuanyang Zheng, Zhijiang Guo, Yichun Yin, Enze Xie, Zhicheng Yang, Qingxing Cao, Haiming Wang, Xiongwei Han, Jing Tang, Chengming Li, Xiaodan Liang

Dual Queries first query the LLM to obtain LLM-generated knowledge such as chain-of-thought (CoT) reasoning, then query the retriever to obtain the final exemplars using both the question and the knowledge.

Dimensionality Reduction In-Context Learning +1
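
The two-stage "Dual Queries" idea described in the DQ-LoRe abstract can be sketched with stubbed components: (1) query an LLM for generated knowledge (here a placeholder CoT string), then (2) retrieve exemplars by similarity over the concatenated question-plus-knowledge representation. The embedder, LLM call, and exemplar pool below are hypothetical stand-ins, and the low-rank re-ranking stage is omitted for brevity.

```python
import numpy as np

# Illustrative sketch of dual-query exemplar retrieval for in-context
# learning. embed() and query_llm_for_cot() are deterministic stubs.

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Stub embedder: deterministic pseudo-random unit vector per text."""
    local = np.random.default_rng(sum(ord(c) for c in text))
    v = local.normal(size=dim)
    return v / np.linalg.norm(v)

def query_llm_for_cot(question: str) -> str:
    """Stub for the first query: a real LLM would generate a CoT rationale."""
    return f"step-by-step reasoning about: {question}"

def retrieve_exemplars(question, exemplar_pool, k=2):
    cot = query_llm_for_cot(question)        # first query: the LLM
    q_vec = embed(question + " " + cot)      # question + generated knowledge
    scored = sorted(exemplar_pool,
                    key=lambda ex: float(embed(ex) @ q_vec),
                    reverse=True)            # second query: the retriever
    return scored[:k]

pool = ["2+2 example", "capital cities example", "derivative example"]
top = retrieve_exemplars("What is 3+5?", pool)
print(top)
```

The intuition is that similarity over the question plus its generated rationale surfaces exemplars that match the required reasoning pattern, not just the surface wording of the question.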

Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis

no code implementations • 25 May 2023 • Xuming Hu, Zhijiang Guo, Zhiyang Teng, Irwin King, Philip S. Yu

Multimodal relation extraction (MRE) is the task of identifying the semantic relationships between two entities based on the context of the sentence-image pair.

Cross-Modal Retrieval Object +4

Multimodal Automated Fact-Checking: A Survey

1 code implementation • 22 May 2023 • Mubashara Akhtar, Michael Schlichtkrull, Zhijiang Guo, Oana Cocarascu, Elena Simperl, Andreas Vlachos

In this survey, we conceptualise a framework for AFC including subtasks unique to multimodal misinformation.

Fact Checking Misinformation

AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web

1 code implementation • NeurIPS 2023 • Michael Schlichtkrull, Zhijiang Guo, Andreas Vlachos

Existing datasets for automated fact-checking have substantial limitations, such as relying on artificial claims, lacking annotations for evidence and intermediate reasoning, or including evidence published after the claim.

Claim Verification Fact Checking +1

Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence

1 code implementation • 2 May 2023 • Xuming Hu, Zhaochen Hong, Zhijiang Guo, Lijie Wen, Philip S. Yu

In light of this, we propose ReRead, a fact verification model that retrieves evidence and verifies claims by: (1) training the evidence retriever to obtain interpretable evidence (i.e., satisfying faithfulness and plausibility criteria); and (2) training the claim verifier to revisit the evidence retrieved by the optimized evidence retriever, improving accuracy.

Claim Verification Decision Making +1

GRATIS: Deep Learning Graph Representation with Task-specific Topology and Multi-dimensional Edge Features

1 code implementation • 19 Nov 2022 • Siyang Song, Yuxin Song, Cheng Luo, Zhiyuan Song, Selim Kuzucu, Xi Jia, Zhijiang Guo, Weicheng Xie, Linlin Shen, Hatice Gunes

Our framework is effective, robust and flexible, and is a plug-and-play module that can be combined with different backbones and Graph Neural Networks (GNNs) to generate a task-specific graph representation from various graph and non-graph data.

Graph Representation Learning

METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets

1 code implementation • 28 Sep 2022 • Peilin Zhou, Zeqiang Wang, Dading Chong, Zhijiang Guo, Yining Hua, Zichang Su, Zhiyang Teng, Jiageng Wu, Jie Yang

To further investigate tweet users' attitudes toward specific entities, 4 types of entities (Person, Organization, Drug, and Vaccine) are selected and annotated with user sentiments, resulting in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets).

Epidemiology named-entity-recognition +3

Scene Graph Modification as Incremental Structure Expanding

no code implementations • COLING 2022 • Xuming Hu, Zhijiang Guo, Yu Fu, Lijie Wen, Philip S. Yu

A scene graph is a semantic representation that expresses the objects, attributes, and relationships between objects in a scene.

A Survey on Automated Fact-Checking

1 code implementation • 26 Aug 2021 • Zhijiang Guo, Michael Schlichtkrull, Andreas Vlachos

Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem.

Fact Checking Misinformation

FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information

1 code implementation • 10 Jun 2021 • Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, Arpit Mittal

Fact verification has attracted a lot of attention in the machine learning and natural language processing communities, as it is one of the key methods for detecting misinformation.

Fact Verification Misinformation

Reasoning with Latent Structure Refinement for Document-Level Relation Extraction

2 code implementations • ACL 2020 • Guoshun Nan, Zhijiang Guo, Ivan Sekulić, Wei Lu

Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities.

Document-level Relation Extraction Relation +2

Densely Connected Graph Convolutional Networks for Graph-to-Sequence Learning

1 code implementation • TACL 2019 • Zhijiang Guo, Yan Zhang, Zhiyang Teng, Wei Lu

We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation.

Graph-to-Sequence Machine Translation +2

Attention Guided Graph Convolutional Networks for Relation Extraction

2 code implementations • ACL 2019 • Zhijiang Guo, Yan Zhang, Wei Lu

Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text.

Relation Relation Extraction +1

Better Transition-Based AMR Parsing with a Refined Search Space

no code implementations • EMNLP 2018 • Zhijiang Guo, Wei Lu

This paper introduces a simple yet effective transition-based system for Abstract Meaning Representation (AMR) parsing.

AMR Parsing Named Entity Recognition (NER) +1
