Search Results for author: Xinyu Pi

Found 5 papers, 3 papers with code

Beyond LLMs: A Linguistic Approach to Causal Graph Generation from Narrative Texts

no code implementations • 10 Apr 2025 • Zehan Li, Ruhua Pan, Xinyu Pi

We propose a novel framework for generating causal graphs from narrative texts, bridging high-level causality and detailed event-specific relationships.

Graph Generation • Language Modeling • +2
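
As a rough illustration of the structure the paper targets, here is a minimal causal graph over narrative events, built with networkx; the events, edges, and library choice are invented for illustration and are not the paper's pipeline.

```python
# Hypothetical causal graph over invented narrative events.
# Directed edges point from cause to effect.
import networkx as nx

G = nx.DiGraph()
G.add_edge("storm hits the coast", "power lines fail")
G.add_edge("power lines fail", "hospital switches to generators")
G.add_edge("storm hits the coast", "roads flood")

# Walk every event causally downstream of a given one.
for effect in sorted(nx.descendants(G, "storm hits the coast")):
    print(effect)
```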

UOUO: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models

no code implementations • 25 Jul 2024 • Xinyu Pi, Mingyuan Wu, Jize Jiang, Haozhen Zheng, Beitong Tian, ChengXiang Zhai, Klara Nahrstedt, Zhiting Hu

Smaller-scale Vision-Language Models (VLMs) often claim to perform on par with larger models in general-domain visual grounding and question-answering benchmarks while offering advantages in computational efficiency and storage.

Computational Efficiency • Question Answering • +1

Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation

1 code implementation • ACL 2022 • Xinyu Pi, Bing Wang, Yan Gao, Jiaqi Guo, Zhoujun Li, Jian-Guang Lou

The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications.

Text-to-SQL
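
To make the perturbation idea concrete, here is a hedged Python sketch: rename table columns to plausible synonyms and test whether a parser's emitted SQL survives the change. The synonym map and the toy parser are hypothetical stand-ins, not the paper's actual benchmark or models.

```python
# Hypothetical "natural" table perturbation: swap column names for synonyms.
SYNONYMS = {"salary": "wage", "employee_name": "staff_name"}

def perturb_schema(columns):
    # Return the schema with synonym-renamed columns.
    return [SYNONYMS.get(col, col) for col in columns]

def is_robust(parse, question, columns):
    # A robust parser should emit equivalent SQL for both schemas.
    sql_orig = parse(question, columns)
    sql_pert = parse(question, perturb_schema(columns))
    for orig, syn in SYNONYMS.items():
        sql_pert = sql_pert.replace(syn, orig)  # map names back to compare
    return sql_orig == sql_pert

# Trivial keyword-matching "parser", just so the sketch runs end to end.
def toy_parse(question, columns):
    q = question.lower()
    for col in columns:
        if any(tok in q for tok in col.split("_")):
            return f"SELECT {col} FROM employees"
    return f"SELECT {columns[0]} FROM employees"

print(is_robust(toy_parse, "What is the salary of Ann?", ["employee_name", "salary"]))
```

The toy parser matches on surface column names, so the check prints False; that brittleness is exactly the failure mode such perturbations are meant to expose.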

LogiGAN: Learning Logical Reasoning via Adversarial Pre-training

1 code implementation • 18 May 2022 • Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, Jian-Guang Lou

We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.

Logical Reasoning • Sentence
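
To sketch the adversarial setup the abstract alludes to, here is a toy generator/verifier loop in Python; the classes, scoring, and update stubs are invented stand-ins so the example runs, not LogiGAN's actual architecture or losses.

```python
import random

class ToyGenerator:
    def fill(self, masked_text):
        # Propose a completion for the masked logical statement.
        return random.choice(["so x is B", "so x is C"])

    def update(self, reward):
        pass  # a real generator would take a gradient step on this reward

class ToyVerifier:
    def score(self, context, conclusion):
        # Plausibility score in [0, 1]; a real verifier is a trained model.
        return 1.0 if conclusion == "so x is B" else 0.2

    def update(self, loss):
        pass  # a real verifier would take a gradient step on this loss

context = "All A are B. x is A. [MASK]"
gold = "so x is B"

gen, ver = ToyGenerator(), ToyVerifier()
candidate = gen.fill(context)
# The verifier learns to separate gold from generated conclusions; the
# generator treats the verifier's score on its own output as a reward.
ver.update(loss=ver.score(context, candidate) - ver.score(context, gold))
gen.update(reward=ver.score(context, candidate))
print("candidate:", candidate)
```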

Reasoning Like Program Executors

1 code implementation • 27 Jan 2022 • Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan Gao, Jian-Guang Lou, Weizhu Chen

Reasoning over natural language is a long-standing goal for the research community.

Ranked #2 on Question Answering on DROP Test (using extra training data)

Logical Reasoning • Math • +1
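
As a hedged illustration of pre-training on program execution, the snippet below synthesizes tiny arithmetic programs, runs them with Python's own interpreter acting as the "executor", and prints (program, result) pairs of the kind a language model could be pre-trained on; the data format is invented for illustration, not the paper's exact recipe.

```python
import random

def make_example(rng):
    # Synthesize a tiny program and let Python execute it for the label.
    a, b, c = (rng.randint(1, 99) for _ in range(3))
    program = f"answer = ({a} + {b}) * {c}"
    scope = {}
    exec(program, scope)  # the interpreter plays the role of the executor
    return program, scope["answer"]

rng = random.Random(0)
for program, result in (make_example(rng) for _ in range(3)):
    # A language model would be pre-trained to map program -> result.
    print(f"{program}  =>  {result}")
```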
