Search Results for author: Xinya Du

Found 29 papers, 23 papers with code

FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback

1 code implementation • 7 Apr 2024 • Liqiang Jing, Xinya Du

To address these limitations, we propose a method to align modalities in LVLMs through Fine-Grained Artificial Intelligence Feedback (FGAIF), which consists of three steps: AI-based Feedback Collection, Fine-grained Reward Model Training, and Reinforcement Learning with Fine-grained Reward.

Attribute Hallucination +1

Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning

1 code implementation • 7 Nov 2023 • Ruosen Li, Xinya Du

Neural models, including large language models (LLMs), achieve superior performance on multi-hop question-answering.

Multi-hop Question Answering • Question Answering

FAITHSCORE: Evaluating Hallucinations in Large Vision-Language Models

1 code implementation • 2 Nov 2023 • Liqiang Jing, Ruosen Li, Yunmo Chen, Mengzhao Jia, Xinya Du

We introduce FAITHSCORE (Faithfulness to Atomic Image Facts Score), a reference-free and fine-grained evaluation metric that measures the faithfulness of the generated free-form answers from large vision-language models (LVLMs).

Descriptive • Instruction Following
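The metric described above reduces to a simple ratio over atomic facts. A minimal sketch, assuming a hypothetical `verify_fact` oracle (e.g. a VQA or visual-entailment model) that checks one atomic fact against the image; this is an illustration of the idea, not the paper's implementation:

```python
def faithscore(atomic_facts, verify_fact):
    """Fraction of atomic facts from a generated free-form answer
    that are supported by the image (reference-free, fine-grained).

    atomic_facts: list of short factual statements extracted from
                  the LVLM's answer.
    verify_fact:  callable returning True if a fact is consistent
                  with the image (stand-in for a verifier model).
    """
    if not atomic_facts:
        return 1.0  # nothing claimed, so nothing hallucinated
    supported = sum(1 for fact in atomic_facts if verify_fact(fact))
    return supported / len(atomic_facts)

# toy usage with a stubbed verifier
facts = ["a dog is on the grass", "the dog wears a red collar"]
print(faithscore(facts, lambda f: "dog" in f))  # 1.0 with this stub
```

A lower score indicates more unsupported (hallucinated) atomic claims in the answer.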

POE: Process of Elimination for Multiple Choice Reasoning

1 code implementation • 24 Oct 2023 • Chenkai Ma, Xinya Du

Language models (LMs) are capable of conducting in-context learning for multiple choice reasoning tasks, but the options in these tasks are treated equally.

In-Context Learning • Logical Reasoning +1
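The elimination idea above can be sketched in two phases: score every option, drop the weakest ones, then predict among the survivors. A minimal illustration, assuming a `score` callable that stands in for an LM's likelihood of an option (the threshold choice here, the mean score, is an assumption, not necessarily the paper's):

```python
def process_of_elimination(options, score):
    """Two-step multiple-choice prediction sketch:
    1) score each option and eliminate those below the mean score,
    2) pick the highest-scoring option among the remaining ones.
    """
    scores = {opt: score(opt) for opt in options}
    mean = sum(scores.values()) / len(scores)
    # keep options at or above the mean; fall back to all if none survive
    survivors = [o for o, s in scores.items() if s >= mean] or list(options)
    return max(survivors, key=lambda o: scores[o])

# toy usage with fixed option scores
toy = {"A": 0.1, "B": 0.4, "C": 0.3, "D": 0.2}
print(process_of_elimination(list(toy), toy.__getitem__))  # "B"
```

Unlike treating all options equally, the first pass lets clearly wrong options be discarded before the final decision.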

Probing Representations for Document-level Event Extraction

1 code implementation • 23 Oct 2023 • Barry Wang, Xinya Du, Claire Cardie

This work is the first to apply the probing paradigm to representations learned for document-level information extraction (IE).

Document-level Event Extraction • Event Extraction +2

AGent: A Novel Pipeline for Automatically Creating Unanswerable Questions

1 code implementation • 10 Sep 2023 • Son Quoc Tran, Gia-Huy Do, Phong Nguyen-Thuan Do, Matt Kretchmar, Xinya Du

In this paper, we demonstrate the usefulness of this AGent pipeline by creating two sets of unanswerable questions from answerable questions in SQuAD and HotpotQA.

Extractive Question-Answering • Question Answering +1

Large Language Models for Automated Open-domain Scientific Hypotheses Discovery

1 code implementation • 6 Sep 2023 • Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, Erik Cambria

Hypothetical induction is recognized as the main reasoning type when scientists make observations about the world and try to propose hypotheses to explain those observations.


PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations

no code implementations • 6 Jul 2023 • Ruosen Li, Teerth Patel, Xinya Du

Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers.

Language Modelling • Large Language Model +1
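The peer rank step above aggregates pairwise preferences into a ranking. A simplified, unweighted sketch using win rates (the paper's PR algorithm additionally weights each reviewer LLM by its own standing; that refinement is omitted here):

```python
from collections import defaultdict

def peer_rank(preferences):
    """Rank models from pairwise preference judgments.

    preferences: list of (reviewer, winner, loser) tuples, one per
                 pairwise comparison made by a peer LLM reviewer.
    Returns model names sorted by descending win rate.
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for _reviewer, winner, loser in preferences:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return sorted(games, key=lambda m: wins[m] / games[m], reverse=True)

# toy usage: two reviewers, three pairwise judgments
prefs = [("r1", "m1", "m2"), ("r2", "m1", "m3"), ("r1", "m3", "m2")]
print(peer_rank(prefs))  # ['m1', 'm3', 'm2']
```

The peer discussion (PD) step is conversational rather than numeric, so it is not sketched here.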

Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning

1 code implementation • 22 May 2023 • Chi Han, Qizheng He, Charles Yu, Xinya Du, Hanghang Tong, Heng Ji

A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.

Link Prediction

Logical Reasoning over Natural Language as Knowledge Representation: A Survey

1 code implementation • 21 Mar 2023 • Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, Erik Cambria

This paper provides a comprehensive overview of a new paradigm of logical reasoning, which uses natural language as knowledge representation and pretrained language models as reasoners, including philosophical definition and categorization of logical reasoning, advantages of the new paradigm, benchmarks and methods, challenges of the new paradigm, possible future directions, and relation to related NLP fields.

Logical Reasoning

Language Models as Inductive Reasoners

1 code implementation • 21 Dec 2022 • Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei

To this end, we propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.

Philosophy

Zero-Shot Classification by Logical Reasoning on Natural Language Explanations

1 code implementation • 7 Nov 2022 • Chi Han, Hengzhi Pei, Xinya Du, Heng Ji

To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations).

Classification • Logical Reasoning +1

Dynamic Global Memory for Document-level Argument Extraction

1 code implementation • ACL 2022 • Xinya Du, Sha Li, Heng Ji

Extracting informative arguments of events from news articles is a challenging problem in information extraction, which requires a global contextual understanding of each document.

Event Argument Extraction • Sentence

Automatic Error Analysis for Document-level Information Extraction

1 code implementation • ACL 2022 • Aliva Das, Xinya Du, Barry Wang, Kejian Shi, Jiayuan Gu, Thomas Porter, Claire Cardie

Document-level information extraction (IE) tasks have recently begun to be revisited in earnest using the end-to-end neural network techniques that have been successful on their sentence-level IE counterparts.

Event Extraction • Relation Extraction +1

QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining

no code implementations • ACL 2021 • Xinya Du, Luheng He, Qi Li, Dian Yu, Panupong Pasupat, Yuan Zhang

To address this problem, we introduce QA-driven slot filling (QASF), which extracts slot-filler spans from utterances with a span-based QA model.

slot-filling • Zero-shot Slot Filling

Template Filling with Generative Transformers

1 code implementation • NAACL 2021 • Xinya Du, Alexander Rush, Claire Cardie

Template filling is generally tackled by a pipeline of two separate supervised systems: one for role-filler extraction and another for template/event recognition.

Few-shot Intent Classification and Slot Filling with Retrieved Examples

no code implementations • NAACL 2021 • Dian Yu, Luheng He, Yuan Zhang, Xinya Du, Panupong Pasupat, Qi Li

Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain.

Classification • Few-Shot Learning +8

Event Extraction by Answering (Almost) Natural Questions

3 code implementations • EMNLP 2020 • Xinya Du, Claire Cardie

The problem of event extraction requires detecting the event trigger and extracting its corresponding arguments.

Event Argument Extraction • Event Extraction +3

Be Consistent! Improving Procedural Text Comprehension using Label Consistency

1 code implementation • NAACL 2019 • Xinya Du, Bhavana Dalvi Mishra, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark, Claire Cardie

Our goal is procedural text comprehension, namely tracking how the properties of entities (e.g., their location) change with time given a procedural text (e.g., a paragraph about photosynthesis, a recipe).

Reading Comprehension

Harvesting Paragraph-Level Question-Answer Pairs from Wikipedia

1 code implementation • ACL 2018 • Xinya Du, Claire Cardie

We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence.

Question Generation • Question-Generation +1

Identifying Where to Focus in Reading Comprehension for Neural Question Generation

no code implementations • EMNLP 2017 • Xinya Du, Claire Cardie

A first step in the task of automatically generating questions for testing reading comprehension is to identify "question-worthy" sentences, i.e., sentences in a text passage that humans find worthwhile to ask questions about.

Dependency Parsing • Machine Translation +8
