Search Results for author: Miguel A. Carreira-Perpinan

Found 6 papers, 0 papers with code

A fast, universal algorithm to learn parametric nonlinear embeddings

no code implementations NeurIPS 2015 Miguel A. Carreira-Perpinan, Max Vladymyrov

This has two advantages: 1) the algorithm is universal, in that a specific learning algorithm for any choice of embedding and mapping can be constructed simply by reusing existing algorithms for the embedding and for the mapping.

Dimensionality Reduction
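
As a minimal sketch of the chain-rule reuse the excerpt alludes to: the gradient of E(F(X; Θ)) with respect to Θ factors into the embedding gradient dE/dZ (computed by the existing embedding algorithm) times the Jacobian of the mapping. The toy objective, linear map, and all names below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def embedding_grad(Z, W):
    """dE/dZ for a toy quadratic embedding objective
    E(Z) = sum_ij W_ij ||z_i - z_j||^2, standing in for any embedding
    whose gradient routine already exists."""
    L = np.diag(W.sum(axis=1)) - W   # graph Laplacian (W symmetric)
    return 4.0 * L @ Z

def mapping(X, theta):
    """A linear map Z = X @ theta, standing in for any parametric mapping."""
    return X @ theta

def chain_rule_grad(X, theta, W):
    """dE(F(X; theta))/dtheta = J^T (dE/dZ): reuse the embedding gradient
    and the mapping's Jacobian (simply X for a linear map)."""
    G = embedding_grad(mapping(X, theta), W)
    return X.T @ G

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
sq = np.square(X[:, None] - X[None, :]).sum(-1)
W = np.exp(-sq / sq.mean())                  # Gaussian affinities
theta = 0.01 * rng.normal(size=(10, 2))
for _ in range(200):                         # plain gradient descent
    theta -= 1e-4 * chain_rule_grad(X, theta, W)
```

Swapping in a different embedding only changes embedding_grad, and a different mapping only changes mapping and its Jacobian, which is the sense in which the construction is universal.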

Structured Multi-Hashing for Model Compression

no code implementations CVPR 2020 Elad Eban, Yair Movshovitz-Attias, Hao Wu, Mark Sandler, Andrew Poon, Yerlan Idelbayev, Miguel A. Carreira-Perpinan

Despite the success of deep neural networks (DNNs), state-of-the-art models are too large to deploy on low-resource devices or common server configurations in which multiple models are held in memory.

Model Compression
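
The excerpt above gives only the motivation. As a loudly hypothetical illustration of the weight-hashing family the title refers to (a single random hash in the style of HashedNets, not the paper's structured multi-hash), a large virtual weight matrix can be backed by a small shared parameter pool:

```python
import numpy as np

class HashedLinear:
    """Linear layer whose (n_out x n_in) virtual weight matrix is backed
    by a much smaller pool of real parameters, selected per entry by a
    fixed random hash. Illustrative only: the CVPR 2020 paper uses a
    structured multi-hash, not this single unstructured hash."""

    def __init__(self, n_in, n_out, pool_size, seed=0):
        rng = np.random.default_rng(seed)
        self.pool = 0.1 * rng.normal(size=pool_size)               # real params
        self.idx = rng.integers(0, pool_size, size=(n_out, n_in))  # fixed hash

    def weight(self):
        # materialize the virtual matrix by gathering from the pool
        return self.pool[self.idx]

    def __call__(self, x):
        return x @ self.weight().T

layer = HashedLinear(n_in=512, n_out=256, pool_size=4096)
y = layer(np.ones((1, 512)))
# storage: 4096 floats instead of 512 * 256 = 131072, a 32x reduction
print(y.shape)
```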

Optimal Quantization Using Scaled Codebook

no code implementations CVPR 2021 Yerlan Idelbayev, Pavlo Molchanov, Maying Shen, Hongxu Yin, Miguel A. Carreira-Perpinan, Jose M. Alvarez

We study the problem of quantizing N sorted, scalar datapoints with a fixed codebook containing K entries that are allowed to be rescaled.

Quantization
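
A sketch of the optimization problem the excerpt states: minimize sum_i (x_i - s * c_{a_i})^2 jointly over the scale s and the assignments a. The alternating heuristic below (nearest scaled entry, then a closed-form least-squares scale) is an illustrative assumption; the paper itself gives an algorithm for the exact optimum:

```python
import numpy as np

def quantize_scaled_codebook(x, codebook, n_iters=50):
    """Minimize sum_i (x_i - s * c[a_i])^2 over the scale s and the
    assignments a by alternating two steps. A simple local heuristic;
    the paper solves the problem optimally."""
    c = np.asarray(codebook, dtype=float)
    s, a = 1.0, None
    for _ in range(n_iters):
        # (1) assignment: nearest entry of the scaled codebook
        a = np.argmin(np.abs(x[:, None] - s * c[None, :]), axis=1)
        # (2) scale: closed-form least-squares fit given the assignments
        q = c[a]
        denom = q @ q
        if denom == 0.0:
            break
        s = (x @ q) / denom
    return s, a

x = np.sort(np.random.default_rng(1).normal(size=1000))
s, a = quantize_scaled_codebook(x, codebook=[-2, -1, 0, 1, 2])
c = np.asarray([-2, -1, 0, 1, 2], dtype=float)
print(f"scale = {s:.3f}, MSE = {np.mean((x - s * c[a]) ** 2):.4f}")
```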

Faster Neural Net Inference via Forests of Sparse Oblique Decision Trees

no code implementations 29 Sep 2021 Yerlan Idelbayev, Arman Zharmagambetov, Magzhan Gabidolla, Miguel A. Carreira-Perpinan

We show that neural nets can be further compressed by replacing some of their layers with a special type of decision forest.

Quantization
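
A sparse oblique tree routes an input with one sparse dot product per level, so evaluating it touches O(depth) small hyperplanes rather than a full dense layer; a forest replacing a layer would sum the outputs of several such trees. The node layout below is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

class Node:
    """Internal node: sparse hyperplane test over a few features.
    Leaf: a constant output vector (leaves could also be linear)."""
    def __init__(self, feat_idx=None, w=None, b=0.0,
                 left=None, right=None, value=None):
        self.feat_idx, self.w, self.b = feat_idx, w, b
        self.left, self.right, self.value = left, right, value

def predict(node, x):
    while node.value is None:
        # sparse dot product: only the selected features are read
        node = node.right if x[node.feat_idx] @ node.w > node.b else node.left
    return node.value

# toy depth-2 tree on 512-d inputs; each split reads just 3 features
leaves = [Node(value=np.full(8, float(v))) for v in range(4)]
tree = Node(np.array([0, 7, 41]), np.array([0.5, -1.0, 2.0]), 0.1,
            left=Node(np.array([3, 9, 100]), np.array([1.0, 1.0, -1.0]), 0.0,
                      left=leaves[0], right=leaves[1]),
            right=Node(np.array([5, 8, 200]), np.array([-1.0, 2.0, 1.0]), 0.0,
                       left=leaves[2], right=leaves[3]))
print(predict(tree, np.random.default_rng(0).normal(size=512)))
```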

Softmax Tree: An Accurate, Fast Classifier When the Number of Classes Is Large

no code implementations EMNLP 2021 Arman Zharmagambetov, Magzhan Gabidolla, Miguel A. Carreira-Perpinan

Classification problems with thousands of classes or more occur naturally in NLP, for example in language modeling or document classification.

Document Classification
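
As a loudly hypothetical sketch of why a tree classifier helps when the number of classes K is large (generic hierarchical routing, not necessarily the paper's exact model): route the input through hyperplane splits, then apply a small softmax only over the classes stored at the reached leaf, so per-input cost grows with tree depth instead of with K:

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def tree_classify(x, nodes, leaves):
    """Route x down a binary tree of hyperplane tests, then pick a class
    with a small softmax over the subset stored at the reached leaf."""
    node = 0
    while node >= 0:                      # negative ids encode leaves
        w, b, left, right = nodes[node]
        node = right if x @ w > b else left
    classes, W = leaves[~node]            # ~(-1) = 0, ~(-2) = 1, ...
    p = softmax(W @ x)                    # softmax over a few classes only
    return classes[int(np.argmax(p))]

rng = np.random.default_rng(0)
d = 64
nodes = [(rng.normal(size=d), 0.0, -1, -2)]      # one root split, two leaves
leaves = [(np.arange(0, 5),  rng.normal(size=(5, d))),
          (np.arange(5, 10), rng.normal(size=(5, d)))]
print(tree_classify(rng.normal(size=d), nodes, leaves))
```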
