Search Results for author: Lirui Zhao

Found 3 papers, 3 papers with code

Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study

1 code implementation • 9 Dec 2023 • Lirui Zhao, Yuxin Zhang, Mingbao Lin, Fei Chao, Rongrong Ji

The poor cross-architecture generalization of dataset distillation greatly weakens its practical significance.

Inductive Bias

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs

1 code implementation • 13 Oct 2023 • Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji

Inspired by Dynamic Sparse Training, DSnoT minimizes the reconstruction error between the dense and sparse LLMs by performing iterative weight pruning-and-growing on top of the sparse LLM, without any retraining.

Network Pruning
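The iterative pruning-and-growing idea from the DSnoT abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual criterion: the function name `pruning_growing_step`, the use of a single linear layer with NumPy, and the simplified greedy rule (revive the pruned weight that most reduces the squared reconstruction error on calibration inputs, then prune the active weight in the same output column whose removal increases it least, keeping sparsity constant).

```python
import numpy as np

def pruning_growing_step(W, M, X):
    """One pruning-and-growing swap on a linear layer y = X @ W.

    W: (d_in, d_out) dense weights; M: binary mask, same shape;
    X: (n, d_in) calibration inputs. Returns an updated mask with
    the same number of active weights. Simplified illustrative
    criterion, not the exact DSnoT rule.
    """
    E = X @ W - X @ (W * M)  # (n, d_out) reconstruction error

    # Reviving weight (i, j) adds c = X[:, i] * W[i, j] to column j of
    # the sparse output; the change in squared error of that column is
    # -2 c.E_j + ||c||^2, computed for all (i, j) at once:
    G = -2.0 * (X.T @ E) * W + (X ** 2).sum(0)[:, None] * W ** 2
    grow_scores = np.where(M == 0, G, np.inf)   # only pruned weights
    gi, gj = np.unravel_index(np.argmin(grow_scores), W.shape)

    # Pruning active weight (i, gj) removes its contribution, changing
    # the squared error by +2 c.E_j + ||c||^2 (approximate: scored on
    # the error before the grow step, in the same output column):
    P = 2.0 * (X.T @ E)[:, gj] * W[:, gj] + (X ** 2).sum(0) * W[:, gj] ** 2
    prune_scores = np.where(M[:, gj] == 1, P, np.inf)
    pi = int(np.argmin(prune_scores))

    M_new = M.copy()
    M_new[gi, gj] = 1  # grow
    M_new[pi, gj] = 0  # prune, so overall sparsity is unchanged
    return M_new
```

Because the swap stays within one output column and the grow/prune scores are closed-form in the calibration activations, each step needs no gradients or retraining, which is the training-free property the abstract emphasizes.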

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models

2 code implementations • 25 Aug 2023 • Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, Ping Luo

To tackle this issue, we introduce an Omnidirectionally calibrated Quantization (OmniQuant) technique for LLMs, which achieves strong performance across diverse quantization settings while retaining the computational efficiency of post-training quantization (PTQ) by efficiently optimizing various quantization parameters.

Common Sense Reasoning Computational Efficiency +3
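The quantization parameters mentioned in the OmniQuant abstract include learnable weight clipping. A minimal sketch of the forward quantize-dequantize pass under an assumed clipping factor is below; the function name `quantize_weights`, the per-tensor min/max scheme, and the fixed `clip` value are all illustrative assumptions (OmniQuant learns such clipping parameters by gradient descent rather than fixing them).

```python
import numpy as np

def quantize_weights(W, n_bits=4, clip=1.0):
    """Uniform affine quantize-dequantize of a weight tensor.

    `clip` shrinks the min/max range before the step size is computed,
    trading clipping error against rounding error; OmniQuant's learnable
    weight clipping optimizes this trade-off, here it is a constant.
    """
    wmax = clip * W.max()
    wmin = clip * W.min()
    qmax = 2 ** n_bits - 1
    scale = (wmax - wmin) / qmax            # step size of the uniform grid
    zero = np.round(-wmin / scale)          # zero-point (integer offset)
    q = np.clip(np.round(W / scale) + zero, 0, qmax)  # integer codes
    return scale * (q - zero)               # dequantized approximation
```

Because only scalar parameters like `clip` (and, in the paper, equivalent-transformation parameters) are optimized on a small calibration set, the procedure keeps PTQ-level cost instead of full quantization-aware retraining.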
