Search Results for author: Junfeng Fang

Found 18 papers, 10 papers with code

LLM-Virus: Evolutionary Jailbreak Attack on Large Language Models

1 code implementation • 28 Dec 2024 • Miao Yu, Junfeng Fang, Yingjie Zhou, Xing Fan, Kun Wang, Shirui Pan, Qingsong Wen

While safety-aligned large language models (LLMs) are increasingly used as the cornerstone of powerful systems such as multi-agent frameworks for solving complex real-world problems, they remain vulnerable to adversarial queries, such as jailbreak attacks, that attempt to induce harmful content.

Transfer Learning

Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence

no code implementations • 18 Dec 2024 • Jinghan He, Kuan Zhu, Haiyun Guo, Junfeng Fang, Zhenglin Hua, Yuheng Jia, Ming Tang, Tat-Seng Chua, Jinqiao Wang

Large vision-language models (LVLMs) have made substantial progress in integrating large language models (LLMs) with visual inputs, enabling advanced multimodal reasoning.

Hallucination, Multimodal Reasoning

Context-DPO: Aligning Language Models for Context-Faithfulness

1 code implementation • 18 Dec 2024 • Baolong Bi, Shaohan Huang, Yiwei Wang, Tianchi Yang, Zihan Zhang, Haizhen Huang, Lingrui Mei, Junfeng Fang, Zehao Li, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Shenghua Liu

Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information.

RAG

On the Role of Attention Heads in Large Language Model Safety

1 code implementation • 17 Oct 2024 • Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, Yongbin Li

In light of this, recent research on safety mechanisms has emerged, revealing that when safety representations or components are suppressed, the safety capabilities of LLMs are compromised.

Attribute, Language Modeling +2

G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks

no code implementations • 15 Oct 2024 • Guibin Zhang, Yanwei Yue, Xiangguo Sun, Guancheng Wan, Miao Yu, Junfeng Fang, Kun Wang, Dawei Cheng

Recent advancements in large language model (LLM)-based agents have demonstrated that collective intelligence can significantly surpass the capabilities of individual agents, primarily due to well-crafted inter-agent communication topologies.

HumanEval, Language Modelling +2

DiffGAD: A Diffusion-based Unsupervised Graph Anomaly Detector

1 code implementation • 9 Oct 2024 • Jinghan Li, Yuan Gao, Jinda Lu, Junfeng Fang, Congcong Wen, Hui Lin, Xiang Wang

Graph Anomaly Detection (GAD) is crucial for identifying abnormal entities within networks, garnering significant attention across various fields.

Graph Anomaly Detection

Neuron-Level Sequential Editing for Large Language Models

2 code implementations • 5 Oct 2024 • Houcheng Jiang, Junfeng Fang, Tianyu Zhang, An Zhang, Ruipeng Wang, Tao Liang, Xiang Wang

This work explores sequential model editing in large language models (LLMs), a critical task that modifies internal knowledge within LLMs continuously through multi-round editing, with each round incorporating updates or corrections to adjust model outputs without costly retraining.

Model Editing

Text-guided Diffusion Model for 3D Molecule Generation

no code implementations • 4 Oct 2024 • Yanchen Luo, Junfeng Fang, Sihang Li, Zhiyuan Liu, Jiancan Wu, An Zhang, Wenjie Du, Xiang Wang

The de novo generation of molecules with targeted properties is crucial in biology, chemistry, and drug discovery.

3D Molecule Generation, Diversity +1

AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models

2 code implementations • 3 Oct 2024 • Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Xiang Wang, Xiangnan He, Tat-Seng Chua

To address this, we introduce AlphaEdit, a novel solution that projects the perturbation onto the null space of the preserved knowledge before applying it to the parameters.
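The null-space projection described above can be illustrated with a minimal NumPy sketch. This is not the AlphaEdit implementation; the matrix `K0` (preserved-knowledge keys) and vector `delta` (candidate weight perturbation) are hypothetical stand-ins, and the projector is the standard pseudoinverse-based construction `P = I - K0⁺ K0`.

```python
import numpy as np

# Hypothetical setup: rows of K0 are key vectors of knowledge we want to
# preserve; delta is a candidate perturbation to the model parameters.
rng = np.random.default_rng(0)
K0 = rng.normal(size=(5, 16))    # 5 preserved keys in a 16-dim space
delta = rng.normal(size=(16,))   # candidate perturbation

# Projector onto the null space of K0: P = I - K0^+ K0,
# where K0^+ is the Moore-Penrose pseudoinverse.
P = np.eye(K0.shape[1]) - np.linalg.pinv(K0) @ K0
delta_proj = P @ delta

# After projection, the perturbation leaves the preserved keys untouched:
# K0 @ delta_proj is (numerically) the zero vector.
print(np.allclose(K0 @ delta_proj, 0))
```

Because `K0 @ P = K0 - K0 K0⁺ K0 = 0`, any perturbation filtered through `P` cannot change the model's response on the preserved keys, which is the property the abstract attributes to AlphaEdit.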

knowledge editing

Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia

1 code implementation • 2 Oct 2024 • Miao Yu, Junyuan Mao, Guibin Zhang, Jingheng Ye, Junfeng Fang, Aoxiao Zhong, Yang Liu, Yuxuan Liang, Kun Wang, Qingsong Wen

Research into the external behaviors and internal mechanisms of large language models (LLMs) has shown promise in addressing complex tasks in the physical world.

Language Modeling, Language Modelling +2

StruEdit: Structured Outputs Enable Fast and Accurate Knowledge Editing for Large Language Models

no code implementations • 16 Sep 2024 • Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Hongcheng Gao, Junfeng Fang, Xueqi Cheng

To achieve such ideal question-answering systems, popular knowledge editing methods generally aim to locate and then edit outdated knowledge in the natural language outputs.

knowledge editing, Question Answering

The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs

1 code implementation • 18 Jun 2024 • Kun Wang, Guibin Zhang, Xinnan Zhang, Junfeng Fang, Xun Wu, Guohao Li, Shirui Pan, Wei Huang, Yuxuan Liang

Based on these observations, we introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs and beyond.

Node Classification

MolTC: Towards Molecular Relational Modeling In Language Models

1 code implementation • 6 Feb 2024 • Junfeng Fang, Shuai Zhang, Chang Wu, Zhengyi Yang, Zhiyuan Liu, Sihang Li, Kun Wang, Wenjie Du, Xiang Wang

Molecular Relational Learning (MRL), which aims to understand interactions between molecular pairs, plays a pivotal role in advancing biochemical research.

Relational Reasoning

Modeling Spatio-temporal Dynamical Systems with Neural Discrete Learning and Levels-of-Experts

no code implementations • 6 Feb 2024 • Kun Wang, Hao Wu, Guibin Zhang, Junfeng Fang, Yuxuan Liang, Yuankai Wu, Roger Zimmermann, Yang Wang

In this paper, we address the issue of modeling and estimating changes in the state of spatio-temporal dynamical systems from a sequence of observations such as video frames.

Optical Flow Estimation

Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness

no code implementations • 2 Feb 2024 • Guibin Zhang, Yanwei Yue, Kun Wang, Junfeng Fang, Yongduo Sui, Kai Wang, Yuxuan Liang, Dawei Cheng, Shirui Pan, Tianlong Chen

Specifically, GST initially constructs a topology & semantic anchor at low training cost, then performs dynamic sparse training to align the sparse graph with the anchor.

Adversarial Defense, Graph Learning
