Search Results for author: Pengfei Yu

Found 23 papers, 7 papers with code

COVID-19 Claim Radar: A Structured Claim Extraction and Tracking System

1 code implementation • ACL 2022 • Manling Li, Revanth Gangi Reddy, Ziqi Wang, Yi-shyuan Chiang, Tuan Lai, Pengfei Yu, Zixuan Zhang, Heng Ji

To tackle the challenge of accurate and timely communication regarding the COVID-19 pandemic, we present a COVID-19 Claim Radar to automatically extract supporting and refuting claims on a daily basis.

Lifelong Event Detection with Knowledge Transfer

1 code implementation • EMNLP 2021 • Pengfei Yu, Heng Ji, Prem Natarajan

We focus on lifelong event detection as an exemplar case and propose a new problem formulation that is also generalizable to other IE tasks.

Event Detection • Lifelong learning • +1

Do Language Models Have Bayesian Brains? Distinguishing Stochastic and Deterministic Decision Patterns within Large Language Models

no code implementations • 12 Jun 2025 • Andrea Yaoyun Cui, Pengfei Yu

Building on this assumption, prior research has used simulated Gibbs sampling, inspired by experiments designed to elicit human priors, to infer the priors of language models.

Decision Making
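
The snippet above mentions simulated Gibbs sampling as a way to elicit a language model's prior. Below is a minimal, purely illustrative Python sketch of that idea; `ask_model_for_estimate` is a hypothetical stand-in for an actual LLM call, not the paper's implementation.

```python
import random

def ask_model_for_estimate(successes: int, trials: int) -> float:
    """Hypothetical stand-in for prompting an LLM with observed data and
    parsing its estimate; it mixes the data with a fixed bias to mimic a
    model carrying an internal prior."""
    noisy = random.gauss(successes / trials, 0.05)
    return min(1.0, max(0.0, 0.7 * noisy + 0.3 * 0.6))

def simulated_gibbs(num_iterations: int = 500, trials: int = 10) -> list:
    """Alternate between sampling data under the current hypothesis and
    asking the (stand-in) model for a new hypothesis given that data."""
    theta = 0.5  # current hypothesis, e.g. a success rate
    chain = []
    for _ in range(num_iterations):
        successes = sum(random.random() < theta for _ in range(trials))
        theta = ask_model_for_estimate(successes, trials)
        chain.append(theta)
    return chain

if __name__ == "__main__":
    samples = simulated_gibbs()
    # The chain's long-run distribution is read as the elicited prior.
    print(sum(samples) / len(samples))
```

Whether such a chain actually behaves like a proper Gibbs sampler is precisely the stochastic-vs-deterministic assumption the paper's title questions.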

Ordered-subsets Multi-diffusion Model for Sparse-view CT Reconstruction

no code implementations • 15 May 2025 • Pengfei Yu, Bin Huang, Minghui Zhang, Weiwen Wu, Shaoyu Wang, Qiegen Liu

Experimental results demonstrate that OSMM outperforms traditional diffusion models in terms of image quality and noise resilience, offering a powerful and versatile solution for advanced CT imaging in sparse-view scenarios.

CT Reconstruction

LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation

no code implementations • 25 Feb 2025 • Pengzhi Li, Pengfei Yu, Zide Liu, Wei He, Xuhao Pan, Xudong Rao, Tao Wei, Wei Chen

In this paper, we introduce LDGen, a novel method for integrating large language models (LLMs) into existing text-to-image diffusion models while minimizing computational demands.

Image Generation • Language Modeling • +2

The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination

no code implementations • 22 Feb 2025 • Yuji Zhang, Sha Li, Cheng Qian, Jiateng Liu, Pengfei Yu, Chi Han, Yi R. Fung, Kathleen McKeown, ChengXiang Zhai, Manling Li, Heng Ji

To address it, we propose a novel concept: knowledge overshadowing, where a model's dominant knowledge can obscure less prominent knowledge during text generation, causing the model to fabricate inaccurate details.

Hallucination • Text Generation

Gene-Metabolite Association Prediction with Interactive Knowledge Transfer Enhanced Graph for Metabolite Production

no code implementations • 24 Oct 2024 • Kexuan Xin, Qingyun Wang, Junyu Chen, Pengfei Yu, Huimin Zhao, Heng Ji

In the rapidly evolving field of metabolic engineering, the quest for efficient and precise gene target identification for metabolite production enhancement presents significant challenges.

Link Prediction • Prediction • +1

Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models

no code implementations • 10 Jul 2024 • Yuji Zhang, Sha Li, Jiateng Liu, Pengfei Yu, Yi R. Fung, Jing Li, Manling Li, Heng Ji

This phenomenon partially stems from training data imbalance, which we verify on both pretrained and fine-tuned models across a wide range of LM families and sizes. From a theoretical point of view, knowledge overshadowing can be interpreted as over-generalization of the dominant conditions (patterns).

Hallucination • Language Modeling • +1
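
As a toy illustration of the imbalance-and-over-generalization intuition in the snippet above (not the paper's method), the sketch below builds a crude count-based model whose backoff to a dominant pattern overshadows the correct answer for a rare condition; the data and the blending scheme are assumptions.

```python
from collections import Counter, defaultdict

# Toy imbalanced corpus: the dominant condition ("capital of X") vastly
# outnumbers the rare one ("largest city of X").
corpus = (
    [("capital of Australia", "Canberra")] * 99
    + [("largest city of Australia", "Sydney")] * 1
)

exact = defaultdict(Counter)    # counts keyed on the full condition
backoff = defaultdict(Counter)  # counts keyed only on the entity, mimicking over-generalization
for condition, answer in corpus:
    entity = condition.split(" of ")[-1]
    exact[condition][answer] += 1
    backoff[entity][answer] += 1

def generate(condition: str, blend: float = 0.9) -> str:
    """Blend exact-condition statistics with entity-level backoff; the
    dominant pattern's answer can overshadow the rare condition's answer."""
    entity = condition.split(" of ")[-1]
    total_exact = sum(exact[condition].values()) or 1
    total_backoff = sum(backoff[entity].values()) or 1
    scores = Counter()
    for answer in set(exact[condition]) | set(backoff[entity]):
        scores[answer] = (
            (1 - blend) * exact[condition][answer] / total_exact
            + blend * backoff[entity][answer] / total_backoff
        )
    return scores.most_common(1)[0][0]

print(generate("capital of Australia"))       # Canberra (dominant, correct)
print(generate("largest city of Australia"))  # Canberra (overshadows the correct Sydney)
```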

Why Does New Knowledge Create Messy Ripple Effects in LLMs?

no code implementations • 2 Jul 2024 • Jiaxin Qin, Zixuan Zhang, Chi Han, Manling Li, Pengfei Yu, Heng Ji

Extensive previous research has focused on post-training knowledge editing (KE) for language models (LMs) to ensure that knowledge remains accurate and up-to-date.

knowledge editing • Negation

EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries

no code implementations • 17 Feb 2024 • Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji

The dynamic nature of real-world information necessitates efficient knowledge editing (KE) in large language models (LLMs) for knowledge updating.

knowledge editing

MuChin: A Chinese Colloquial Description Benchmark for Evaluating Language Models in the Field of Music

1 code implementation • 15 Feb 2024 • ZiHao Wang, Shuyu Li, Tao Zhang, Qi Wang, Pengfei Yu, Jinyang Luo, Yan Liu, Ming Xi, Kejun Zhang

To this end, we present MuChin, the first open-source music description benchmark in Chinese colloquial language, designed to evaluate the performance of multimodal LLMs in understanding and describing music.

Information Retrieval • Music Information Retrieval

Defining a New NLP Playground

no code implementations • 31 Oct 2023 • Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji

The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history.

Information Association for Language Model Updating by Mitigating LM-Logical Discrepancy

no code implementations • 29 May 2023 • Pengfei Yu, Heng Ji

To evaluate and address the core challenge, we propose a new formulation of the information updating task that requires only an unstructured updating corpus and measures performance by how well the update generalizes to question-answer pairs about the updated information.

Answer Generation • Articles • +5
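
A hedged sketch of the evaluation loop this task formulation implies: the updater sees only an unstructured corpus, and quality is scored on question-answer pairs about the updated information. The function names (`evaluate_update`, `naive_updater`) are hypothetical stand-ins, not from the paper.

```python
from typing import Callable, List, Tuple

def evaluate_update(
    update_model: Callable[[List[str]], Callable[[str], str]],
    updating_corpus: List[str],
    qa_pairs: List[Tuple[str, str]],
) -> float:
    """Apply an updating procedure to the corpus, then score exact-match
    accuracy on question-answer pairs derived from the updated facts."""
    answer = update_model(updating_corpus)
    correct = sum(answer(q).strip().lower() == a.strip().lower() for q, a in qa_pairs)
    return correct / max(1, len(qa_pairs))

def naive_updater(corpus: List[str]) -> Callable[[str], str]:
    """Trivial retrieval-style stand-in: answer with the corpus sentence
    sharing the most words with the question."""
    def answer(question: str) -> str:
        q_words = set(question.lower().split())
        return max(corpus, key=lambda s: len(q_words & set(s.lower().split())))
    return answer

corpus = ["The new CEO of ExampleCorp is Jane Doe."]
qa = [("Who is the CEO of ExampleCorp?", "The new CEO of ExampleCorp is Jane Doe.")]
print(evaluate_update(naive_updater, corpus, qa))  # 1.0 for this toy case
```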

SongDriver: Real-time Music Accompaniment Generation without Logical Latency nor Exposure Bias

no code implementations • 13 Sep 2022 • ZiHao Wang, Qihao Liang, Kejun Zhang, Yuxing Wang, Chen Zhang, Pengfei Yu, Yongsheng Feng, Wenbo Liu, Yikai Wang, Yuntai Bao, Yiheng Yang

In this paper, we propose SongDriver, a real-time music accompaniment generation system without logical latency or exposure bias.

MUSE: Textual Attributes Guided Portrait Painting Generation

1 code implementation • 9 Nov 2020 • Xiaodan Hu, Pengfei Yu, Kevin Knight, Heng Ji, Bo Li, Honghui Shi

Experiments show that our approach can accurately illustrate 78% of textual attributes, which also help MUSE capture the subject in a more creative and expressive way.

Attribute

AuxBlocks: Defense Adversarial Example via Auxiliary Blocks

no code implementations • 18 Feb 2019 • Yueyao Yu, Pengfei Yu, Wenye Li

Deep learning models are vulnerable to adversarial examples, which poses an indisputable threat to their applications.

Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention

1 code implementation • EMNLP 2018 • Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li

In this paper, we aim to incorporate the hierarchical information of relations for distantly supervised relation extraction and propose a novel hierarchical attention scheme.

Knowledge Graphs • Relation • +2
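
A minimal numpy sketch of coarse-to-fine attention over a bag of sentence embeddings, in the spirit of the hierarchical attention the snippet describes; how the two attention levels are combined here is an assumption, not the paper's exact scheme.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coarse_to_fine_bag_representation(sentences, coarse_query, fine_query):
    """sentences: (n, d) bag of sentence embeddings for one entity pair;
    coarse_query / fine_query: (d,) queries for a coarse relation class
    (e.g. /people) and a fine-grained relation (e.g. /people/person/place_of_birth)."""
    coarse_weights = softmax(sentences @ coarse_query)  # attend with the coarse relation
    fine_weights = softmax(sentences @ fine_query)      # attend with the fine relation
    # Combine the two levels multiplicatively (in log space) and renormalize.
    combined = softmax(np.log(coarse_weights + 1e-9) + np.log(fine_weights + 1e-9))
    return combined @ sentences                          # (d,) bag representation

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 16))
rep = coarse_to_fine_bag_representation(bag, rng.normal(size=16), rng.normal(size=16))
print(rep.shape)  # (16,)
```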

Tracking of enriched dialog states for flexible conversational information access

no code implementations • 9 Nov 2017 • Yinpei Dai, Zhijian Ou, Dawei Ren, Pengfei Yu

The above observations motivate us to enrich the current representation of dialog states and collect a brand new dialog dataset about movies, based upon which we build a new DST, called enriched DST (EDST), for flexibly accessing movie information.

Conversational Information Access • dialog state tracking • +2
