Sparse Learning

43 papers with code • 3 benchmarks • 3 datasets

Sparse learning covers methods that induce or maintain sparsity in model parameters: training deep neural networks with sparse weights throughout training, pruning dense networks, sparse regression, and feature selection, with the goal of matching dense performance at a fraction of the parameter and compute cost.

Most implemented papers

Variational Dropout Sparsifies Deep Neural Networks

ars-ashuha/variational-dropout-sparsifies-dnn ICML 2017

We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout.
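
As a rough illustration of how this technique yields sparsity: each weight gets its own learned dropout rate, and weights whose rate grows effectively unbounded can be removed at test time. Below is a minimal PyTorch sketch under that assumption; the layer name, attribute names, and threshold are illustrative, not the reference implementation's API, and the full method also adds an approximate KL penalty over the dropout rates to the training loss (omitted here).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LinearSVDO(nn.Module):
        """Linear layer with per-weight variational dropout (illustrative sketch)."""
        def __init__(self, in_features, out_features, threshold=3.0):
            super().__init__()
            self.theta = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
            self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
            self.bias = nn.Parameter(torch.zeros(out_features))
            self.threshold = threshold  # prune weights whose log alpha exceeds this value

        def log_alpha(self):
            # alpha = sigma^2 / theta^2, parameterized per weight
            return self.log_sigma2 - torch.log(self.theta ** 2 + 1e-8)

        def forward(self, x):
            if self.training:
                # local reparameterization: sample activations instead of weights
                mean = F.linear(x, self.theta, self.bias)
                var = F.linear(x ** 2, torch.exp(self.log_sigma2)) + 1e-8
                return mean + torch.sqrt(var) * torch.randn_like(mean)
            # at test time, drop weights whose dropout rate is effectively infinite
            mask = (self.log_alpha() < self.threshold).float()
            return F.linear(x, self.theta * mask, self.bias)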

Rigging the Lottery: Making All Tickets Winners

google-research/rigl ICML 2020

There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model.
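
RigL instead keeps the network sparse for the whole run and periodically updates its connectivity: it drops a fraction of the lowest-magnitude active weights and regrows the same number of connections where the dense gradient magnitude is largest. A rough NumPy sketch of one such update follows; the drop-fraction schedule, per-layer sparsity distribution, and update interval from the paper are omitted.

    import numpy as np

    def rigl_update(weight, grad, mask, drop_fraction=0.3):
        """One drop/grow step: remove the weakest active weights, regrow where |grad| is largest."""
        prev_inactive = mask == 0
        n_swap = int(drop_fraction * mask.sum())

        # Drop: among active weights, deactivate those with the smallest magnitude.
        drop_scores = np.where(mask == 1, np.abs(weight), np.inf)
        drop_idx = np.argsort(drop_scores, axis=None)[:n_swap]
        mask.flat[drop_idx] = 0

        # Grow: among weights that were inactive before this step, activate those
        # with the largest gradient magnitude; new connections start at zero.
        grow_scores = np.where(prev_inactive, np.abs(grad), -np.inf)
        grow_idx = np.argsort(grow_scores, axis=None)[-n_swap:]
        mask.flat[grow_idx] = 1
        weight.flat[grow_idx] = 0.0

        return weight * mask, mask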

The State of Sparsity in Deep Neural Networks

ars-ashuha/variational-dropout-sparsifies-dnn 25 Feb 2019

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet.
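
One of the compared techniques, magnitude pruning, is commonly driven by a gradual cubic sparsity schedule that ramps the fraction of pruned weights up over training. A small sketch of that schedule; the start/end steps and target sparsity are illustrative.

    def sparsity_at_step(step, s_initial=0.0, s_final=0.9, begin_step=0, end_step=100_000):
        """Cubic ramp of the pruned fraction from s_initial to s_final."""
        if step < begin_step:
            return s_initial
        if step >= end_step:
            return s_final
        progress = (step - begin_step) / (end_step - begin_step)
        return s_final + (s_initial - s_final) * (1.0 - progress) ** 3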

A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems

iancovert/Neural-GC 18 Mar 2013

A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems.
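
GIST itself iterates a proximal-gradient (shrinkage-and-thresholding) step, pairing a gradient step on the smooth loss with the closed-form proximal operator of the non-convex penalty and a line search on the step size. The sketch below shows the generic iteration with soft-thresholding (the convex l1 case) standing in for the proximal operator and a fixed step size; the data and regularization weight are illustrative.

    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def ist_step(w, grad_f, step, lam):
        """One proximal-gradient step: gradient descent on f, then the prox of the penalty."""
        return soft_threshold(w - step * grad_f(w), step * lam)

    # Example: sparse least squares, f(w) = 0.5 * ||X w - y||^2
    rng = np.random.default_rng(0)
    X, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
    w = np.zeros(20)
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of grad f
    for _ in range(200):
        w = ist_step(w, lambda v: X.T @ (X @ v - y), step, lam=0.1)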

Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training

Shiweiliuiiiiiii/In-Time-Over-Parameterization 4 Feb 2021

By starting from a random sparse network and continuously exploring sparse connectivities during training, we can perform an Over-Parameterization in the space-time manifold, closing the gap in the expressibility between sparse training and dense training.
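
The paper's In-Time Over-Parameterization rate measures how much of the parameter space such a run has explored. A small sketch of tracking it as the union of all masks seen during training (class and method names are illustrative):

    import numpy as np

    class ExplorationTracker:
        def __init__(self, shape):
            self.ever_active = np.zeros(shape, dtype=bool)

        def update(self, mask):
            # record every connection that has been active at any point in training
            self.ever_active |= mask.astype(bool)

        def rate(self):
            return self.ever_active.mean()

    # usage: call tracker.update(mask) after each prune/grow step (e.g. the
    # rigl_update sketch above) and read tracker.rate() at the end of training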

Feature Selection: A Data Perspective

jundongl/scikit-feature 29 Jan 2016

To facilitate and promote the research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/).
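
As a generic illustration of filter-style feature selection (using scikit-learn rather than the repository's own API): score each feature against the label and keep the top k. The dataset and k are illustrative.

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    X, y = load_breast_cancer(return_X_y=True)
    selector = SelectKBest(score_func=mutual_info_classif, k=10)
    X_selected = selector.fit_transform(X, y)
    print(X_selected.shape)  # (n_samples, 10)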

Sparse Networks from Scratch: Faster Training without Losing Performance

TimDettmers/sparse_learning ICLR 2020

We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels.
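
The basic mechanics of keeping weights sparse throughout training can be sketched as re-applying binary masks after every optimizer step (PyTorch assumed); the paper's full sparse momentum algorithm additionally redistributes and regrows connections using momentum magnitudes, which is omitted here.

    import torch

    def apply_masks(model, masks):
        """Zero out pruned weights so the parameterization stays sparse end to end."""
        with torch.no_grad():
            for name, param in model.named_parameters():
                if name in masks:
                    param.mul_(masks[name])

    # usage inside the training loop:
    #   loss.backward(); optimizer.step(); apply_masks(model, masks)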

Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization

alisaab/l0bnb 13 Apr 2020

In this work, we present a new exact MIP framework for $\ell_0\ell_2$-regularized regression that can scale to $p \sim 10^7$, achieving speedups of at least $5000$x, compared to state-of-the-art exact methods.
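
For reference, the $\ell_0\ell_2$-regularized regression problem solved exactly here has the standard form $\min_{\beta} \ \tfrac{1}{2}\lVert y - X\beta\rVert_2^2 + \lambda_0 \lVert\beta\rVert_0 + \lambda_2 \lVert\beta\rVert_2^2$, where $\lVert\beta\rVert_0$ counts the nonzero coefficients and $\lambda_0, \lambda_2$ trade off sparsity against shrinkage.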

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration

Shiweiliuiiiiiii/GraNet NeurIPS 2021

Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has recently drawn considerable attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization).

abess: A Fast Best Subset Selection Library in Python and R

abess-team/abess-A-Fast-Best-Subset-Selection-Library-in-Python-and-R 19 Oct 2021

In addition, a user-friendly R library is available at the Comprehensive R Archive Network.