Search Results for author: Parichay Kapoor

Found 7 papers, 0 papers with code

NNStreamer: Efficient and Agile Development of On-Device AI Systems

no code implementations · 16 Jan 2021 · MyungJoo Ham, Jijoong Moon, Geunsik Lim, Jaeyun Jung, Hyoungjoo Ahn, Wook Song, Sangjung Woo, Parichay Kapoor, Dongju Chae, Gichan Jang, Yongjoo Ahn, Jihoon Lee

NNStreamer efficiently handles neural networks with complex data stream pipelines on devices, significantly improving overall performance with minimal effort.
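NNStreamer builds such pipelines on top of GStreamer, exposing neural networks as ordinary stream filters. A hypothetical command-line sketch (the `tensor_converter`, `tensor_filter`, and `tensor_sink` element names are NNStreamer's; the `videotestsrc` source, 224x224 resolution, and model path are illustrative, and running it requires the NNStreamer GStreamer plugins to be installed):

```shell
# Hypothetical on-device inference pipeline: generate test video frames,
# scale them to the model's input size, convert to a tensor stream, and
# run a (placeholder) TensorFlow Lite model on each frame.
gst-launch-1.0 videotestsrc num-buffers=10 ! videoconvert ! videoscale \
  ! video/x-raw,width=224,height=224,format=RGB \
  ! tensor_converter \
  ! tensor_filter framework=tensorflow-lite model=./model.tflite \
  ! tensor_sink
```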

Network Pruning for Low-Rank Binary Index

no code implementations · 25 Sep 2019 · Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei

In this paper, we propose a new network pruning technique that generates a low-rank binary index matrix to compress index data significantly.

Model Compression · Network Pruning +1
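The storage saving a low-rank binary index targets can be illustrated with simple arithmetic: magnitude pruning yields a binary mask (the index data), and storing that mask as two rank-r binary factors instead of a dense bitmap shrinks its footprint. A minimal NumPy sketch; the 512x512 layer, 10% density, and rank 16 are illustrative assumptions, and the paper's actual factorization algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # stand-in for a trained weight matrix

# Magnitude pruning: keep the largest 10% of weights; the binary mask
# records which positions survive (the "index" data to be compressed).
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) >= threshold

# Dense binary index cost: one bit per weight.
dense_bits = mask.size

# A rank-r binary factorization mask ~ f(B1 @ B2), with B1 of shape
# (512, r) and B2 of shape (r, 512), only stores the two factors:
r = 16  # hypothetical rank
lowrank_bits = r * (mask.shape[0] + mask.shape[1])

print(dense_bits, lowrank_bits)  # 262144 vs 16384: a 16x smaller index
```

For this layer the dense index needs 262,144 bits while the rank-16 factors need 16,384, a 16x reduction in index storage before any compression of the surviving weight values themselves.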

Structured Compression by Weight Encryption for Unstructured Pruning and Quantization

no code implementations · CVPR 2020 · Se Jung Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei

Model compression techniques, such as pruning and quantization, are becoming increasingly important for reducing memory footprints and the amount of computation.

Model Compression · Quantization

Network Pruning for Low-Rank Binary Indexing

no code implementations · 14 May 2019 · Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei

Pruning is an efficient model compression technique to remove redundancy in the connectivity of deep neural networks (DNNs).

Model Compression · Network Pruning

DeepTwist: Learning Model Compression via Occasional Weight Distortion

no code implementations · 30 Oct 2018 · Dongsoo Lee, Parichay Kapoor, Byeongwook Kim

Model compression has been introduced to reduce the required hardware resources while maintaining model accuracy.

Model Compression · Quantization
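The "occasional weight distortion" idea can be sketched on a toy problem: train normally, but every few steps replace the weights with their compressed (here, quantized) version, so optimization settles on weights that survive compression. Everything below, including the linear-regression setup, learning rate, 0.25 quantization grid, and distortion period, is an illustrative assumption rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
true_w = np.array([1.5, -2.0, 0.5, 3.0])  # happens to lie on the grid
y = X @ true_w

def quantize(v, q=0.25):
    # The "distortion": round each weight to the nearest multiple of q.
    return np.round(v / q) * q

w = np.zeros(4)
lr, distort_every = 0.05, 20  # hypothetical hyperparameters

for step in range(1, 1001):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # full-batch MSE gradient
    w -= lr * grad
    if step % distort_every == 0:
        # Occasionally snap weights to the compressed representation,
        # then keep training from the distorted point.
        w = quantize(w)

final = quantize(w)
loss = np.mean((X @ final - y) ** 2)
print(final, loss)
```

Because the target weights lie on the quantization grid in this toy setup, the periodically distorted training run converges to weights whose quantized version incurs essentially zero loss, which is the effect occasional distortion is meant to produce.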

Computation-Efficient Quantization Method for Deep Neural Networks

no code implementations · 27 Sep 2018 · Parichay Kapoor, Dongsoo Lee, Byeongwook Kim, Saehyung Lee

We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model.

Quantization
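As a rough illustration of the binary model such a method optimizes, a standard one-bit scheme keeps only the sign of each trained full-precision weight plus a single per-layer scale alpha = mean(|w|), which minimizes the squared error ||w - alpha*sign(w)||^2. This generic binarization is a stand-in for context, not the paper's specific technique:

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(1024)  # stand-in for a trained full-precision layer

# Least-squares optimal single scale for sign-based binarization:
# minimizing ||w - alpha*sign(w)||^2 over alpha gives alpha = mean(|w|).
alpha = np.abs(w).mean()
w_bin = alpha * np.sign(w)  # the corresponding binary model's weights

# Relative quantization error left for fine-tuning to absorb.
err = np.linalg.norm(w - w_bin) / np.linalg.norm(w)
print(round(float(err), 3))
```

For Gaussian-like weights this leaves a relative error around 0.6, which is why directly optimizing the binary model after re-training, rather than binarizing once and hoping, matters.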

A method of limiting performance loss of CNNs in noisy environments

no code implementations · 3 Feb 2017 · James R. Geraci, Parichay Kapoor

Convolutional Neural Network (CNN) recognition rates drop in the presence of noise.

Denoising
