Search Results for author: Luiz M Franca-Neto

Found 2 papers, 0 papers with code

A Directed-Evolution Method for Sparsification and Compression of Neural Networks with Application to Object Identification and Segmentation and considerations of optimal quantization using small number of bits

no code implementations • 12 Jun 2022 • Luiz M Franca-Neto

After the desired sparsification level is reached in each layer of the network by directed evolution (DE), a variety of quantization alternatives are applied to the surviving parameters to find the lowest number of bits that can represent them with an acceptable loss of accuracy.

Quantization
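The abstract above describes a post-sparsification bit-width search: quantize the surviving (non-zero) parameters at progressively fewer bits and keep the smallest width whose accuracy drop is tolerable. A minimal sketch of that idea, assuming a simple symmetric uniform quantizer and a hypothetical `evaluate_accuracy` callback (neither is specified by the paper), might look like this:

```python
import numpy as np

def uniform_quantize(weights, num_bits):
    """Symmetric uniform quantize-dequantize of the surviving (non-zero) weights."""
    mask = weights != 0                       # preserve the sparsity pattern
    if not mask.any():
        return weights.copy()
    max_abs = np.max(np.abs(weights[mask]))
    levels = 2 ** (num_bits - 1) - 1          # signed integer range
    scale = max_abs / levels
    q = np.round(weights / scale) * scale     # quantize, then dequantize
    return np.where(mask, q, 0.0)

def search_min_bits(weights, evaluate_accuracy, baseline_acc, tolerance=0.01):
    """Return the smallest bit-width whose accuracy drop stays within tolerance.

    `evaluate_accuracy` is a hypothetical callback that scores the model after
    its parameters are replaced by the quantized `weights`.
    """
    for num_bits in range(2, 9):              # try 2..8-bit representations
        acc = evaluate_accuracy(uniform_quantize(weights, num_bits))
        if baseline_acc - acc <= tolerance:
            return num_bits, acc
    return None, None                          # no low-bit setting was acceptable
```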

Field-Programmable Deep Neural Network (DNN) Learning and Inference accelerator: a concept

no code implementations • 14 Feb 2018 • Luiz M Franca-Neto

The computational delay per layer is made roughly the same along the pipelined accelerator structure.
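One simple way to equalize per-stage delay in such a pipeline is to allocate compute resources in proportion to each layer's workload. The sketch below is an illustration only: the layer MAC counts and the processing-element (PE) budget are made up, not taken from the paper.

```python
# Hypothetical per-layer multiply-accumulate (MAC) counts for a small CNN.
layer_macs = [90_000, 1_200_000, 2_400_000, 600_000]
total_pes = 1024                               # processing elements available

# Allocate PEs proportionally to each layer's work so every pipeline stage
# finishes in roughly the same number of cycles.
total_work = sum(layer_macs)
alloc = [max(1, round(total_pes * m / total_work)) for m in layer_macs]
stage_cycles = [m / p for m, p in zip(layer_macs, alloc)]

for i, (p, c) in enumerate(zip(alloc, stage_cycles)):
    print(f"layer {i}: {p:4d} PEs, ~{c:,.0f} cycles per input")
```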
