Data Compression
92 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
DC-BENCH: Dataset Condensation Benchmark
Dataset condensation is an emerging technique that aims to learn a tiny dataset capturing the rich information encoded in the original dataset.
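For intuition, here is a minimal distribution-matching sketch of the idea (an illustrative toy, not one of the benchmarked methods): synthetic examples are optimized so that their mean embedding under an encoder matches that of the real data.

```python
# Toy dataset-condensation sketch via distribution matching (illustrative
# only; DC-Bench evaluates published condensation methods, not this toy).
import torch

def condense(real_loader, encoder, n_synthetic, img_shape, steps=1000, lr=0.1):
    # Assumes real_loader yields (images, labels) batches.
    syn = torch.randn(n_synthetic, *img_shape, requires_grad=True)
    opt = torch.optim.SGD([syn], lr=lr)
    for _ in range(steps):
        real_x, _ = next(iter(real_loader))
        with torch.no_grad():
            real_feat = encoder(real_x).mean(dim=0)  # mean real embedding
        syn_feat = encoder(syn).mean(dim=0)          # mean synthetic embedding
        loss = ((real_feat - syn_feat) ** 2).sum()   # match first moments
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn.detach()
```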
QuadConv: Quadrature-Based Convolutions with Applications to Non-Uniform PDE Data Compression
We present a new convolution layer for deep learning architectures which we call QuadConv -- an approximation to continuous convolution via quadrature.
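As a rough illustration (our reading of the abstract, not the authors' implementation), a quadrature rule with nodes y_i and weights w_i turns the continuous convolution (k * f)(x) = ∫ k(x − y) f(y) dy into a weighted sum, which works directly on non-uniform sample locations:

```python
# Quadrature-based convolution sketch on non-uniform 1-D samples.
import numpy as np

def quad_conv(kernel, f_vals, nodes, weights, query_pts):
    # f_vals[i] = f(nodes[i]); kernel is a continuous function k(r),
    # which in a QuadConv-style layer would be learned.
    out = np.empty(len(query_pts))
    for j, x in enumerate(query_pts):
        out[j] = np.sum(weights * kernel(x - nodes) * f_vals)
    return out

# Example on irregular sample locations with crude trapezoid-like weights.
nodes = np.sort(np.random.rand(64))
weights = np.gradient(nodes)                 # ~ (y[i+1] - y[i-1]) / 2
f_vals = np.sin(2 * np.pi * nodes)
kernel = lambda r: np.exp(-(r / 0.1) ** 2)   # stand-in for a learned kernel
y = quad_conv(kernel, f_vals, nodes, weights, query_pts=nodes)
```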
Fast and Multi-aspect Mining of Complex Time-stamped Event Streams
Thanks to its concise yet effective summarization, CubeScope can also detect the sudden appearance of anomalies and identify their types in practice.
SegMap: 3D Segment Mapping using Data-Driven Descriptors
While current methods extract descriptors for the single task of localization, SegMap leverages a data-driven descriptor to extract meaningful features that can also be used to reconstruct a dense 3D map of the environment and to extract semantic information.
XGBoost: Scalable GPU Accelerated Learning
We describe the multi-GPU gradient boosting algorithm implemented in the XGBoost library (https://github.com/dmlc/xgboost).
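A hedged usage sketch (parameter names follow XGBoost >= 2.0, which selects the GPU via device="cuda"; older releases exposed the same algorithm as tree_method="gpu_hist"):

```python
# Train a GPU-accelerated XGBoost model on toy data.
import numpy as np
import xgboost as xgb

X = np.random.rand(10000, 50)
y = (X[:, 0] > 0.5).astype(int)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "tree_method": "hist",   # histogram-based boosting described in the paper
    "device": "cuda",        # run on GPU (XGBoost >= 2.0)
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```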
SAIFE: Unsupervised Wireless Spectrum Anomaly Detection with Interpretable Features
Detecting anomalous behavior in the wireless spectrum is a demanding task due to the sheer complexity of electromagnetic spectrum use.
Matrix Factorization on GPUs with Memory Optimization and Approximate Computing
Current MF implementations are either optimized for a single machine or require a large compute cluster, and even then remain insufficient.
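For context, the update rule such implementations accelerate is plain SGD on the factorization loss; a minimal CPU sketch (without the paper's GPU memory optimizations or approximate computing):

```python
# SGD matrix factorization: approximate R ~= P @ Q.T from observed entries.
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=32, lr=0.01, reg=0.05, epochs=10):
    # ratings: list of (user, item, rating) tuples.
    P = 0.1 * np.random.randn(n_users, k)   # user factors
    Q = 0.1 * np.random.randn(n_items, k)   # item factors
    for _ in range(epochs):
        np.random.shuffle(ratings)
        for u, i, r in ratings:
            u, i = int(u), int(i)
            err = r - P[u] @ Q[i]           # prediction error on this entry
            pu = P[u].copy()                # keep old value for Q's update
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q
```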
DeepZip: Lossless Data Compression using Recurrent Neural Networks
We combine recurrent neural network predictors with an arithmetic coder and losslessly compress a variety of synthetic, text and genomic datasets.
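A sketch of the pipeline's accounting (hypothetical interface; DeepZip pairs the RNN with an actual arithmetic coder): the coder spends about −log2 p bits on a symbol the predictor assigns probability p, so better predictions mean shorter output.

```python
# Ideal codelength of a symbol stream under a sequential predictor. An
# arithmetic coder approaches this total to within ~2 bits overall.
import math

def ideal_compressed_bits(symbols, predictor):
    """predictor(history) -> dict mapping each symbol to P(symbol | history)."""
    total_bits, history = 0.0, []
    for s in symbols:
        probs = predictor(history)
        total_bits += -math.log2(probs[s])  # bits the coder spends on s
        history.append(s)
    return total_bits

# Toy order-0 predictor standing in for the RNN.
def uniform_predictor(history, alphabet=("A", "C", "G", "T")):
    return {a: 1 / len(alphabet) for a in alphabet}
```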
Pareto-optimal data compression for binary classification tasks
The goal of lossy data compression is to reduce the storage cost of a data set $X$ while retaining as much information as possible about something ($Y$) that you care about.
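Formally (our paraphrase of the setup), a compressed representation $Z = f(X)$ traces a Pareto frontier between storage cost and retained predictive information:

```latex
% Tradeoff sketch: B is a bit budget, I mutual information, H entropy.
\max_{f}\; I(Z; Y)
\quad \text{subject to} \quad
H(Z) \le B, \qquad Z = f(X).
```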
LFZip: Lossy compression of multivariate floating-point time series data via improved prediction
Time series data compression is emerging as an important problem with the growth in IoT devices and sensors.
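A simplified error-bounded sketch in the spirit of this approach (assumptions: a previous-value predictor instead of the paper's learned predictors, and no entropy-coding stage): each residual is quantized so that every reconstructed value stays within a user-set bound eps.

```python
# Error-bounded predictive lossy compression of a 1-D float series.
import numpy as np

def compress(series, eps):
    recon_prev, indices = 0.0, []
    for x in series:
        pred = recon_prev                          # stand-in predictor
        q = int(np.round((x - pred) / (2 * eps)))  # quantized residual
        indices.append(q)                          # entropy-code these in practice
        recon_prev = pred + q * 2 * eps            # mirror decoder reconstruction
    return indices

def decompress(indices, eps):
    out, prev = [], 0.0
    for q in indices:
        prev = prev + q * 2 * eps
        out.append(prev)
    return np.array(out)  # guaranteed |x - x_hat| <= eps elementwise
```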