Network Pruning

214 papers with code • 5 benchmarks • 5 datasets

Network Pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve performance comparable to the original with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
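
The train–prune–fine-tune pipeline above can be sketched with the most common criterion, magnitude pruning: weights whose absolute value falls below a sparsity-quantile threshold are assumed redundant and zeroed out. A minimal numpy sketch (the layer weights here are random stand-ins for a trained model; fine-tuning would then update only the surviving weights):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold  # True = weight survives pruning
    return weights * mask, mask

# Stand-in for a trained layer; in practice this comes from step 1 (training).
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.0%} of weights")  # roughly 10%
```

After this one-shot pruning step, fine-tuning (step 3) retrains the masked network so the remaining weights compensate for the removed ones.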

Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning

CRYPTO-KU/FL-Byzantine-Library 9 Apr 2024

Hence, inspired by sparse neural networks, we introduce a hybrid sparse Byzantine attack composed of two parts: one sparse in nature, attacking only certain NN locations with higher sensitivity, and the other more silent but accumulating over time. Each part ideally targets a different type of defence mechanism, and together they form a strong but imperceptible attack.

Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch

xidongwu/autotrainonce 21 Mar 2024

Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging.

Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency

saintslab/adver-fine 14 Mar 2024

We present experiments on two benchmark datasets showing that adversarial fine-tuning of compressed models can achieve robustness performance comparable to adversarially trained models, while also improving computational efficiency.

FALCON: FLOP-Aware Combinatorial Optimization for Neural Network Pruning

mazumder-lab/falcon 11 Mar 2024

In this paper, we propose FALCON, a novel combinatorial-optimization-based framework for network pruning that jointly takes into account model accuracy (fidelity), FLOPs, and sparsity constraints.
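
To illustrate why a FLOP constraint changes the pruning decision (this is an illustration only, not FALCON's method: FALCON solves a joint combinatorial optimization over fidelity, FLOPs, and sparsity, while the sketch below is a simple greedy heuristic with hypothetical importance scores and per-unit FLOP costs):

```python
import numpy as np

def flop_aware_select(importance, flops, flop_budget):
    """Greedily keep the units with the best importance-per-FLOP ratio
    until the FLOP budget is exhausted."""
    order = np.argsort(-importance / flops)  # best importance-per-FLOP first
    keep, spent = [], 0.0
    for i in order:
        if spent + flops[i] <= flop_budget:
            keep.append(i)
            spent += flops[i]
    return sorted(keep)

# Hypothetical scores and costs for four prunable units.
importance = np.array([5.0, 1.0, 3.0, 0.5])
flops = np.array([4.0, 1.0, 2.0, 1.0])
kept = flop_aware_select(importance, flops, flop_budget=5.0)
```

Note that unit 0 has the highest raw importance but is skipped here because its FLOP cost would blow the budget, which is exactly the trade-off a purely sparsity-based criterion ignores.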

What to Do When Your Discrete Optimization Is the Size of a Neural Network?

hsilva664/discrete_nn 15 Feb 2024

Oftentimes, machine learning applications using neural networks involve solving discrete optimization problems, such as in pruning, parameter-isolation-based continual learning and training of binary networks.

Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models

itsmattei/ken 5 Feb 2024

This approach maintains model performance while allowing storage of only the optimized subnetwork, leading to significant memory savings.

Fluctuation-based Adaptive Structured Pruning for Large Language Models

casia-iva-lab/flap 19 Dec 2023

Being retraining-free is important for LLM pruning methods.

Towards Higher Ranks via Adversarial Weight Pruning

huawei-noah/Efficient-Computing NeurIPS 2023

To this end, we propose a Rank-based PruninG (RPG) method to maintain the ranks of sparse weights in an adversarial manner.

29 Nov 2023

LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS

VITA-Group/LightGaussian 28 Nov 2023

Recent advancements in real-time neural rendering using point-based techniques have paved the way for the widespread adoption of 3D representations.

Filter-Pruning of Lightweight Face Detectors Using a Geometric Median Criterion

idt-iti/lightweight-face-detector-pruning 28 Nov 2023

Face detectors are becoming a crucial component of many applications, including surveillance, that often have to run on edge devices with limited processing power and memory.
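
A geometric-median criterion ranks a layer's filters by how replaceable they are: filters lying near the geometric median of all filters in the layer carry largely redundant information and are pruned first. A minimal sketch (the geometric median is approximated here by the filter with the smallest sum of pairwise distances; the weights are random stand-ins for a trained conv layer):

```python
import numpy as np

def prune_by_geometric_median(filters, n_prune):
    """Return indices of the n_prune filters closest to the layer's
    (approximate) geometric median, i.e. the most replaceable ones."""
    flat = filters.reshape(len(filters), -1)
    # Pairwise Euclidean distances between all filters in the layer.
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    redundancy = dists.sum(axis=1)  # small sum => near the geometric median
    return np.argsort(redundancy)[:n_prune]  # indices to remove

# Stand-in for a trained conv layer: 16 filters of shape 3x3x3.
rng = np.random.default_rng(1)
conv_w = rng.normal(size=(16, 3, 3, 3))
to_remove = prune_by_geometric_median(conv_w, n_prune=4)
```

Unlike magnitude-based criteria, this does not assume small-norm filters are unimportant; it removes filters whose contribution the remaining filters can best reproduce.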
