Dimensionality Reduction

702 papers with code • 0 benchmarks • 10 datasets

Dimensionality reduction is the task of mapping high-dimensional data to a lower-dimensional representation while preserving as much of its meaningful structure (variance, pairwise distances, or local neighborhoods) as possible.

(Image credit: openTSNE)

Most implemented papers

UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction

lmcinnes/umap 9 Feb 2018

UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction.
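
A minimal usage sketch with the umap-learn package from the linked repo (the data and parameter values below are illustrative, not from the paper):

```python
import numpy as np
import umap  # pip install umap-learn

X = np.random.rand(500, 50)  # stand-in high-dimensional data

# n_neighbors trades off local vs. global structure; min_dist controls
# how tightly points are packed in the low-dimensional embedding.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2, random_state=42)
embedding = reducer.fit_transform(X)  # shape (500, 2)
```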

Adversarial Autoencoders

eriklindernoren/PyTorch-GAN 18 Nov 2015

In this paper, we propose the "adversarial autoencoder" (AAE), a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference. It does so by matching the aggregated posterior of the autoencoder's hidden code vector to an arbitrary prior distribution.
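
A minimal PyTorch sketch of the three training losses (network sizes and the 0/1 label convention are assumptions for illustration; the repo's implementation differs in detail):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 8, 784  # hypothetical sizes
encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

x = torch.rand(32, data_dim)          # stand-in batch
z_fake = encoder(x)                   # sample from the aggregated posterior
z_real = torch.randn(32, latent_dim)  # sample from the chosen prior N(0, I)

recon_loss = F.mse_loss(decoder(z_fake), x)
# The discriminator learns to separate prior samples (real) from codes (fake)...
d_loss = bce(discriminator(z_real), torch.ones(32, 1)) + \
         bce(discriminator(z_fake.detach()), torch.zeros(32, 1))
# ...while the encoder is trained to fool it, pushing q(z) toward the prior.
g_loss = bce(discriminator(z_fake), torch.ones(32, 1))
```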

XGBoost: A Scalable Tree Boosting System

dmlc/xgboost 9 Mar 2016

In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges.
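
A minimal sketch using the xgboost package's scikit-learn interface (data and hyperparameters are placeholders; the paper's contribution is the scalable system underneath, e.g. sparsity-aware splits and cache-aware access):

```python
import numpy as np
import xgboost as xgb  # pip install xgboost

X = np.random.rand(200, 10)
y = (X[:, 0] > 0.5).astype(int)  # stand-in binary labels

model = xgb.XGBClassifier(n_estimators=100, max_depth=6, learning_rate=0.1)
model.fit(X, y)
preds = model.predict(X)
```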

ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

BangguWu/ECANet CVPR 2020

By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity.
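
The module itself is small; a PyTorch sketch of the idea (kernel size fixed at 3 here, whereas the paper also derives it adaptively from the channel count):

```python
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Channel attention via a 1D conv over channel descriptors --
    no bottleneck, hence no dimensionality reduction."""
    def __init__(self, k_size=3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                               # x: (B, C, H, W)
        y = self.avg_pool(x)                            # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))  # conv over C: (B, 1, C)
        y = y.transpose(-1, -2).unsqueeze(-1)           # back to (B, C, 1, 1)
        return x * torch.sigmoid(y)                     # rescale channels
```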

Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering

boyangumn/DCN ICML 2017

To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN).
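
A minimal PyTorch sketch of the joint objective (sizes and the trade-off weight are assumptions; the paper alternates network updates with discrete K-means updates rather than optimizing everything in one step):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_in, d_latent, k = 100, 10, 5  # hypothetical sizes
encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_latent))
decoder = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_in))
centroids = torch.randn(k, d_latent)

x = torch.rand(32, d_in)
z = encoder(x)
assign = torch.cdist(z, centroids).argmin(dim=1)  # nearest-centroid step
# Reconstruction keeps the DR faithful; the second term pulls latent points
# toward their assigned centroids, making the space K-means-friendly.
loss = F.mse_loss(decoder(z), x) + \
       0.1 * ((z - centroids[assign]) ** 2).sum(dim=1).mean()
```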

CatBoost: unbiased boosting with categorical features

catboost/catboost NeurIPS 2018

This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit.
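
A minimal sketch with the catboost package (toy data; cat_features marks the columns handled by the ordered target statistics the paper introduces to avoid prediction shift):

```python
from catboost import CatBoostClassifier  # pip install catboost

X = [["red", 1.0], ["blue", 2.5], ["red", 0.3], ["green", 1.7]] * 25
y = [0, 1, 0, 1] * 25

model = CatBoostClassifier(iterations=200, verbose=False)
model.fit(X, y, cat_features=[0])  # column 0 is categorical
```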

Rethinking Spatial Dimensions of Vision Transformers

naver-ai/pit ICCV 2021

We empirically show that such a spatial dimension reduction is beneficial to a transformer architecture as well, and propose a novel Pooling-based Vision Transformer (PiT) upon the original ViT model.
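
A PyTorch sketch of the idea behind PiT's pooling layer, which shrinks the token grid spatially while widening channels (a strided depthwise conv; shapes here are illustrative, not the repo's exact implementation):

```python
import torch
import torch.nn as nn

class TokenPooling(nn.Module):
    """Reshape tokens to a 2D grid, downsample spatially, widen channels."""
    def __init__(self, dim_in, dim_out, stride=2):
        super().__init__()
        # dim_out must be a multiple of dim_in for the depthwise groups
        self.conv = nn.Conv2d(dim_in, dim_out, kernel_size=3,
                              stride=stride, padding=1, groups=dim_in)

    def forward(self, tokens, h, w):            # tokens: (B, H*W, C)
        x = tokens.transpose(1, 2).reshape(tokens.size(0), -1, h, w)
        x = self.conv(x)                        # (B, C', H/2, W/2)
        return x.flatten(2).transpose(1, 2)     # (B, H/2*W/2, C')

pool = TokenPooling(64, 128)
out = pool(torch.rand(2, 14 * 14, 64), 14, 14)  # -> (2, 49, 128)
```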

Efficient Algorithms for t-distributed Stochastic Neighborhood Embedding

pavlin-policar/fastTSNE 25 Dec 2017

t-distributed Stochastic Neighborhood Embedding (t-SNE) is a method for dimensionality reduction and visualization that has become widely popular in recent years.
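
A minimal usage sketch (fastTSNE is now maintained as openTSNE; the data and parameters below are illustrative):

```python
import numpy as np
from openTSNE import TSNE  # pip install openTSNE

X = np.random.rand(500, 50)  # stand-in high-dimensional data

# Perplexity sets the effective neighborhood size; n_jobs parallelizes the
# gradient computation that the paper's algorithms accelerate.
embedding = TSNE(perplexity=30, n_jobs=4, random_state=42).fit(X)
```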

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

ratschlab/SOM-VAE ICLR 2019

We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real world medical time series application on the eICU data set.

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

huanzhang12/ZOO-Attack 14 Aug 2017

However, rather than leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks that directly estimate the gradients of the targeted DNN to generate adversarial examples.
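
The core trick is a coordinate-wise finite-difference gradient estimate from query access alone; a minimal sketch (the function name and toy loss are hypothetical):

```python
import numpy as np

def zoo_coordinate_grad(f, x, i, h=1e-4):
    """Estimate the i-th partial derivative of a black-box scalar loss f
    via the symmetric difference (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    e = np.zeros_like(x)
    e.flat[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

# Usage: estimate one coordinate of the gradient of a toy loss
f = lambda x: np.sum(x ** 2)
x = np.ones(10)
g0 = zoo_coordinate_grad(f, x, 0)  # approximately 2.0
```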