no code implementations • 4 Feb 2025 • Jianze Li, JieZhang Cao, Yong Guo, Wenbo Li, Yulun Zhang
We use the state-of-the-art diffusion model FLUX.1-dev as both the teacher model and the base model.
1 code implementation • 26 Nov 2024 • Libo Zhu, Jianze Li, Haotong Qin, Wenbo Li, Yulun Zhang, Yong Guo, Xiaokang Yang
Diffusion-based image super-resolution (SR) models have shown superior performance at the cost of multiple denoising steps.
1 code implementation • 5 Oct 2024 • Jianze Li, JieZhang Cao, Zichen Zou, Xiongfei Su, Xin Yuan, Yulun Zhang, Yong Guo, Xiaokang Yang
However, these methods incur substantial training costs, and the teacher's limitations may constrain the performance of the student model.
no code implementations • 28 Aug 2024 • Weilin Lin, Li Liu, Jianze Li, Hui Xiong
This method, based on our findings on neuron weight changes (NWCs) of random unlearning, uses optimal transport (OT)-based model fusion to combine the advantages of both pruned and backdoored models.
no code implementations • 30 May 2024 • Weilin Lin, Li Liu, Shaokui Wei, Jianze Li, Hui Xiong
Recently, in settings without access to poisoned data, unlearning the model with clean data and then learning a pruning mask has contributed to backdoor defense.
no code implementations • 25 Nov 2022 • Taoyong Cui, Jianze Li, Yuhan Dong, Li Liu
In the first stage, we propose a novel algorithm called polar decomposition-based orthogonal initialization (PDOI) to find a good initialization for the orthogonal optimization.
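The core operation behind a polar-decomposition-based initialization is projecting an arbitrary matrix onto its nearest orthogonal matrix, which can be read off from the SVD. The paper's full PDOI algorithm is not reproduced here; the sketch below only illustrates the standard polar-factor computation (the function name `nearest_orthogonal` is ours, not from the paper):

```python
import numpy as np

def nearest_orthogonal(A):
    """Return the orthogonal polar factor of A.

    The polar decomposition writes A = Q P with Q orthogonal and P
    symmetric positive semidefinite; Q is the orthogonal matrix
    closest to A in Frobenius norm. Given the SVD A = U S Vt,
    the polar factor is simply Q = U @ Vt.
    """
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = nearest_orthogonal(A)
# Q satisfies Q.T @ Q = I up to floating-point error
print(np.allclose(Q.T @ Q, np.eye(4), atol=1e-8))
```

Such a projection gives an orthogonality-respecting starting point, which is presumably what makes it useful as an initialization for subsequent orthogonal optimization.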
1 code implementation • 5 Aug 2022 • Runkai Zheng, Rongjun Tang, Jianze Li, Li Liu
Pruning these channels was then shown to be effective in mitigating the backdoor behaviors.
no code implementations • 20 Feb 2019 • Jianze Li, Xiao-Ping Zhang, Tuan Tran
In this paper, we propose a new algorithm for point cloud denoising based on the tensor Tucker decomposition.
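A Tucker approximation can be computed by the standard truncated higher-order SVD (HOSVD): unfold the tensor along each mode, keep the leading left singular vectors, and reconstruct from the resulting core. The paper's specific denoising algorithm is not shown here; this is only a minimal HOSVD sketch (function names and the rank choice are illustrative assumptions):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_lowrank(T, ranks):
    """Truncated HOSVD: a standard way to obtain a Tucker approximation.

    For each mode, the factor matrix holds the leading left singular
    vectors of the mode unfolding; the core is T projected onto these
    factors, and the low-multilinear-rank reconstruction follows.
    """
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)   # project onto factor subspaces
    rec = core
    for mode, U in enumerate(factors):
        rec = mode_multiply(rec, U, mode)       # map back to original space
    return rec
```

For denoising, one would apply such a truncation to noisy patches so that the low-multilinear-rank reconstruction retains structure while discarding noise spread across the remaining directions.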