Sparse Learning

11 papers with code · Methodology

Greatest papers with code

The State of Sparsity in Deep Neural Networks

25 Feb 2019 · google-research/google-research

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet.

MODEL COMPRESSION · SPARSE LEARNING
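
One of the three techniques this paper evaluates is magnitude pruning: remove the smallest-magnitude weights until a target sparsity is reached. A minimal sketch (function name and flat-list weight representation are illustrative, not the paper's code):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A toy illustration of global magnitude pruning; real implementations
    operate on tensors and prune gradually over training.
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

At 50% sparsity, `magnitude_prune([0.5, -0.1, 0.9, 0.05], 0.5)` keeps only the two largest-magnitude weights.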

Feature Selection: A Data Perspective

29 Jan 2016 · jundongl/scikit-feature

To facilitate and promote the research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/).

FEATURE SELECTION · SPARSE LEARNING
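
The simplest family the survey covers is filter-style selection: score each feature independently, keep the top k. A toy variance-based selector illustrating the idea (this is a sketch, not scikit-feature's API):

```python
def select_top_k_by_variance(X, k):
    """Rank features (columns of X, a list of rows) by sample variance
    and return the sorted indices of the k highest-variance features.

    Variance is a crude relevance proxy; real filter methods use scores
    such as mutual information or Fisher score.
    """
    n = len(X)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        scores.append((var, j))
    # Take the k best-scoring feature indices, reported in column order.
    return sorted(j for _, j in sorted(scores, reverse=True)[:k])
```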

Variational Dropout Sparsifies Deep Neural Networks

ICML 2017 · ars-ashuha/variational-dropout-sparsifies-dnn

We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout.

SPARSE LEARNING

Sparse Networks from Scratch: Faster Training without Losing Performance

ICLR 2020 · TimDettmers/sparse_learning

We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels.

IMAGE CLASSIFICATION · SPARSE LEARNING

Rigging the Lottery: Making All Tickets Winners

ICLR 2020 · google-research/rigl

There is a large body of work on training dense networks to yield sparse networks for inference.

IMAGE CLASSIFICATION · LANGUAGE MODELLING · SPARSE LEARNING
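
RigL keeps the network sparse for the whole of training by periodically updating the connectivity: drop the smallest-magnitude active weights, then regrow the same number of connections where the dense gradient is largest. A minimal sketch in that style (names and the flat-list representation are illustrative, not google-research/rigl's API):

```python
def rigl_update(weights, grads, mask, drop_frac):
    """One drop/grow connectivity update in the style of RigL.

    Drops the drop_frac smallest-magnitude active weights, then regrows
    the same number of inactive connections with the largest gradient
    magnitude; regrown weights start at zero.
    """
    active = [i for i, on in enumerate(mask) if on]
    inactive = [i for i, on in enumerate(mask) if not on]
    k = int(len(active) * drop_frac)
    drop = sorted(active, key=lambda i: abs(weights[i]))[:k]
    grow = sorted(inactive, key=lambda i: -abs(grads[i]))[:k]
    new_weights, new_mask = list(weights), list(mask)
    for i in drop:
        new_mask[i], new_weights[i] = False, 0.0
    for i in grow:
        new_mask[i] = True  # weight stays 0.0 until trained
    return new_weights, new_mask
```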

Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms

5 Mar 2018 · hazimehh/L0Learn

In spite of the usefulness of $L_0$-based estimators and generic MIO solvers, there is a steep computational price to pay when compared to popular sparse learning algorithms (e.g., based on $L_1$ regularization).

COMBINATORIAL OPTIMIZATION · FEATURE SELECTION · SPARSE LEARNING
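
The building block of coordinate descent for $L_0$-penalized problems is the scalar hard-thresholding operator: a coordinate is kept only if keeping it beats the per-coordinate penalty. A minimal sketch of that operator (not L0Learn's API):

```python
import math

def hard_threshold(z, lam):
    """Scalar proximal operator of lam * ||x||_0.

    Minimizing 0.5 * (x - z)**2 + lam * [x != 0] gives x = z when
    z**2 / 2 > lam (i.e. |z| > sqrt(2 * lam)), and x = 0 otherwise.
    """
    return z if abs(z) > math.sqrt(2.0 * lam) else 0.0
```

Contrast with the $L_1$ case, whose proximal operator shrinks every coordinate toward zero rather than making a keep-or-kill decision.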

Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization

13 Apr 2020 · alisaab/l0bnb

In this work, we present a new exact MIP framework for $\ell_0\ell_2$-regularized regression that can scale to $p \sim 10^7$, achieving over $3600$x speed-ups compared to the fastest exact methods.

SPARSE LEARNING

The Tradeoff Between Privacy and Accuracy in Anomaly Detection Using Federated XGBoost

16 Jul 2019 · Raymw/Federated-XGBoost

Our proposed federated XGBoost algorithm incorporates data aggregation and sparse federated update processes to balance the tradeoff between privacy and learning performance.

ANOMALY DETECTION · SPARSE LEARNING

A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems

18 Mar 2013 · icc2115/Neural-GC

A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems.

SPARSE LEARNING
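
The algorithm family this paper generalizes is iterative shrinkage-thresholding (ISTA): a gradient step on the smooth loss followed by the regularizer's proximal (shrinkage) operator. For the convex $L_1$ case this reduces to soft-thresholding, sketched here (a minimal illustration, not this paper's non-convex variant):

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrink x toward zero by lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ista_step(x, grad, step, lam):
    """One ISTA iteration on a coordinate-wise representation:
    gradient descent on the smooth loss, then shrinkage."""
    return [soft_threshold(xi - step * gi, step * lam)
            for xi, gi in zip(x, grad)]
```

The paper's GIST algorithm replaces soft-thresholding with the (often closed-form) proximal operator of a non-convex regularizer.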