Data Compression

94 papers with code • 0 benchmarks • 0 datasets




Most implemented papers

XGBoost: A Scalable Tree Boosting System

dmlc/xgboost 9 Mar 2016

In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges.
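The core idea XGBoost scales up is gradient tree boosting: additively fitting small trees to the residuals of the current ensemble. A minimal pure-Python sketch with depth-1 trees (stumps) and squared-error loss — an illustration of the boosting idea only, not the XGBoost system (no regularization, sparsity handling, or distributed training):

```python
# Minimal gradient boosting with decision stumps (squared-error loss).
# Illustrative sketch of the tree-boosting idea behind XGBoost; toy code,
# not the XGBoost implementation.

def fit_stump(x, residuals):
    """Find the split threshold minimizing squared error of a depth-1 tree."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def boost(x, y, n_rounds=20, lr=0.5):
    """Additively fit stumps to the residuals of the current ensemble."""
    pred = [0.0] * len(x)
    stumps = []
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)
```

Each round shrinks the residual; XGBoost adds second-order gradients, regularized leaf weights, and a scalable split-finding system on top of this recursion.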

DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome

magics-lab/dnabert_2 26 Jun 2023

Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area.

Efficient Manifold and Subspace Approximations with Spherelets

david-dunson/GeodesicDistance 26 Jun 2017

There is a rich literature on approximating the unknown manifold, and on exploiting such approximations in clustering, data compression, and prediction.
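Spherelets approximate a manifold locally with pieces of spheres rather than flat planes. A minimal sketch of the underlying building block — fitting a sphere to points by linear least squares (Coope's linearization); this illustrates the local fit only, not the paper's full algorithm:

```python
import numpy as np

def fit_sphere(X):
    """Least-squares sphere fit via Coope's linearization.

    ||x - c||^2 = r^2  <=>  2 c.x + b = ||x||^2  with  b = r^2 - ||c||^2,
    which is linear in (c, b) and solvable by ordinary least squares.
    """
    A = np.hstack([2 * X, np.ones((len(X), 1))])
    y = (X ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    c, b = sol[:-1], sol[-1]
    r = np.sqrt(b + c @ c)
    return c, r
```

Fitting such spheres on local neighborhoods, instead of tangent planes, captures curvature with the same number of parameters per piece.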

Transformer-based Transform Coding

Nikolai10/SwinT-ChARM ICLR 2022

Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms.
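The transform-coding pipeline is: analysis transform, scalar quantization, synthesis transform. A minimal sketch with a random orthonormal linear transform standing in for the learned nonlinear (here, Swin-Transformer-based) transforms the paper studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random orthonormal analysis transform: a stand-in for a learned
# nonlinear transform; A.T serves as the synthesis transform.
A, _ = np.linalg.qr(rng.normal(size=(16, 16)))

def encode(x, step=0.1):
    """Analysis transform, then uniform scalar quantization to integer symbols
    (the symbols would be entropy-coded under a prior model)."""
    return np.round(A @ x / step).astype(int)

def decode(symbols, step=0.1):
    """Dequantize, then apply the synthesis (inverse) transform."""
    return A.T @ (symbols * step)

x = rng.normal(size=16)
x_hat = decode(encode(x))
# Each transform coefficient is off by at most step/2, so for an orthonormal
# transform the l2 reconstruction error is at most (step/2) * sqrt(16).
```

Neural codecs replace A with a learned nonlinear analysis/synthesis pair and entropy-code the symbols under a learned prior; the quantize-in-the-transform-domain structure is the same.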

Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search

xinyandai/product-quantization 12 Nov 2019

In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error.
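The decomposition is exact: any nonzero vector factors as its norm times a unit direction, so the two parts can be quantized separately. A toy sketch of that idea (the codebooks here are illustrative placeholders, not the paper's trained codebooks):

```python
import numpy as np

def decompose(x):
    """Split a vector into its norm and unit direction: x = n * d exactly."""
    n = np.linalg.norm(x)
    return n, x / n

def quantize(x, norm_codebook, direction_codebook):
    """Quantize norm and direction separately, in the spirit of
    norm-explicit quantization: nearest scalar for the norm, maximum
    cosine similarity for the unit direction."""
    n, d = decompose(x)
    n_hat = norm_codebook[np.argmin(np.abs(norm_codebook - n))]
    d_hat = direction_codebook[np.argmax(direction_codebook @ d)]
    return n_hat * d_hat
```

For maximum inner product search the norm matters more than in nearest-neighbor search, which is why spending bits on it explicitly pays off.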

ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

Ma-Lab-Berkeley/ReduNet 21 May 2021

This work attempts to provide a plausible theoretical framework for interpreting modern deep (convolutional) networks from the principles of data compression and discriminative representation.
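The compression principle is made concrete by the rate-reduction objective: the coding rate of the whole feature set minus the class-conditional coding rates. A minimal numpy sketch of those quantities, with notation assumed from the MCR²/ReduNet line of work (d-by-n feature matrix Z, distortion eps):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps) = 1/2 logdet(I + d/(n * eps^2) * Z Z^T):
    roughly, bits needed to code the n columns of the d-by-n matrix Z
    up to distortion eps."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R: rate of the whole set minus the class-conditional rates
    weighted by class proportions; maximizing this expands the whole set
    while compressing each class."""
    n = Z.shape[1]
    r_all = coding_rate(Z, eps)
    r_classes = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return r_all - r_classes
```

ReduNet is constructed by unrolling gradient ascent on this objective, which is what makes each layer of the network white-box.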

Supervised Compression for Resource-Constrained Edge Computing Systems

yoshitomo-matsubara/supervised-compression 21 Aug 2021

There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.

Towards Empirical Sandwich Bounds on the Rate-Distortion Function

mandt-lab/empirical-RD-sandwich ICLR 2022

By contrast, this paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source requiring only i.i.d. …
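For reference, the quantity being sandwiched is the information rate-distortion function, in standard notation (assumed here, not quoted from the paper): the fewest bits per sample achievable at expected distortion at most D,

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\; \mathbb{E}\left[d(X,\hat{X})\right] \le D} I(X; \hat{X}).
```

An empirical "sandwich" brackets this curve between a computable upper bound and a computable lower bound.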

BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing

yoshitomo-matsubara/bottlefit-split_computing 7 Jan 2022

We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.) …

An Introduction to Neural Data Compression

tensorflow/compression 14 Feb 2022

Neural compression is the application of neural networks and other machine learning methods to data compression.
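The link between learned models and compression is that a model assigning probability p(x) to data x can code x in about -log2 p(x) bits (e.g., with an arithmetic coder); neural compression learns p with a network. A minimal sketch of that accounting using a simple fitted i.i.d. symbol model (toy model, not a neural one):

```python
import math
from collections import Counter

def code_length_bits(message, model):
    """Ideal code length of a message under a probability model:
    sum of -log2 p(symbol), i.e., -log2 of the message probability
    under an i.i.d. model."""
    return sum(-math.log2(model[s]) for s in message)

message = "abracadabra"
counts = Counter(message)
model = {s: c / len(message) for s, c in counts.items()}  # fitted i.i.d. model
bits = code_length_bits(message, model)  # = length * empirical entropy
```

A better model assigns higher probability to the data and therefore a shorter code; that is the sense in which training a density model and building a compressor are the same problem.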