Search Results for author: Houcheng Jiang

Found 3 papers, 2 papers with code

Accelerating Diffusion Transformer via Error-Optimized Cache

no code implementations31 Jan 2025 Junxiang Qiu, Shuo Wang, Jinda Lu, Lin Liu, Houcheng Jiang, Yanbin Hao

Existing caching methods accelerate generation by reusing DiT features from the previous time step and skipping computation in the next. However, they tend to locate and cache low-error modules without explicitly reducing the errors that caching introduces, so the quality of generated content declines sharply as caching intensity increases.
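The caching idea described above can be illustrated with a minimal sketch. This is not the paper's error-optimized method; it is a generic feature cache, assuming a hypothetical expensive block whose output is reused on skipped steps and refreshed periodically.

```python
import numpy as np

def heavy_block(x):
    # Stand-in for an expensive DiT transformer block (illustrative only).
    return np.tanh(x) * 2.0

def denoise_with_cache(x, num_steps, cache_every=2):
    """Run a simplified denoising loop, recomputing the block's features
    only every `cache_every` steps and reusing the cached output otherwise."""
    cache = None
    for t in range(num_steps):
        if cache is not None and t % cache_every != 0:
            feat = cache            # skip computation: reuse cached features
        else:
            feat = heavy_block(x)   # full computation: refresh the cache
            cache = feat
        x = x + 0.1 * feat          # simplified update step
    return x
```

With `cache_every=1` the cache is refreshed every step, recovering the uncached baseline; larger values trade accuracy for fewer block evaluations, which is where caching-induced error accumulates.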

Neuron-Level Sequential Editing for Large Language Models

2 code implementations5 Oct 2024 Houcheng Jiang, Junfeng Fang, Tianyu Zhang, An Zhang, Ruipeng Wang, Tao Liang, Xiang Wang

This work explores sequential model editing in large language models (LLMs), a critical task of continuously modifying internal knowledge through multi-round editing, where each round incorporates updates or corrections that adjust the model's outputs without costly retraining.

Model Editing

AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models

2 code implementations3 Oct 2024 Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Xiang Wang, Xiangnan He, Tat-Seng Chua

To address this, we introduce AlphaEdit, a novel solution that projects the perturbation onto the null space of the preserved knowledge before applying it to the parameters.

Knowledge Editing
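The null-space projection at the heart of the abstract above can be sketched in a few lines. This is a hedged illustration, not the AlphaEdit implementation: `K0` stands in for a matrix of preserved-knowledge keys, and the projection is built from an SVD-derived basis of its null space.

```python
import numpy as np

# Hypothetical preserved-knowledge matrix: two key vectors in a 3-dim space.
K0 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

# SVD yields an orthonormal basis; rows of Vt beyond the rank span null(K0).
_, S, Vt = np.linalg.svd(K0)
rank = int(np.sum(S > 1e-10))
null_basis = Vt[rank:]            # shape: (d - rank, d)

# Projection matrix onto the null space of K0.
P = null_basis.T @ null_basis

delta = np.array([0.5, -0.3, 0.8])  # raw parameter perturbation
delta_proj = P @ delta              # perturbation constrained to null(K0)

# By construction, the projected update cannot change outputs on K0's keys.
assert np.allclose(K0 @ delta_proj, 0.0)
```

The design point is that any update lying in the null space of the preserved keys leaves their associated outputs untouched, so edits can be applied without disturbing retained knowledge.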
