Search Results for author: Saeed Damadi

Found 4 papers, 0 papers with code

Learning a Sparse Neural Network using IHT

no code implementations • 29 Apr 2024 • Saeed Damadi, Soroush Zolfaghari, Mahdi Rezaie, Jinglai Shen

This paper investigates whether the theoretical prerequisites for the convergence of IHT hold in neural network (NN) training, providing justification for each of the necessary convergence conditions.
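The paper itself lists no code implementation; the snippet below is only a minimal NumPy sketch of a generic iterative hard thresholding (IHT) update, a gradient step followed by keeping the k largest-magnitude entries, applied to a toy sparse least-squares problem. The function names, step size, and data are assumptions for illustration, not the authors' method.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    out[idx] = w[idx]
    return out

def iht(grad_fn, w0, k, lr=0.05, steps=200):
    """Generic IHT: gradient step, then projection onto k-sparse vectors."""
    w = hard_threshold(w0, k)
    for _ in range(steps):
        w = hard_threshold(w - lr * grad_fn(w), k)
    return w

# Toy usage (hypothetical data): min ||Ax - b||^2 subject to ||x||_0 <= k
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = 1.0
b = A @ x_true
x_hat = iht(lambda x: A.T @ (A @ x - b) / len(b), np.zeros(20), k=3)
```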

The Backpropagation algorithm for a math student

no code implementations • 22 Jan 2023 • Saeed Damadi, Golnaz Moharrer, Mostafa Cham

A deep neural network (DNN) is a composition of vector-valued functions, and training a DNN requires computing the gradient of the loss function with respect to all parameters (a minimal chain-rule sketch follows this entry).

Math • valid
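The abstract frames a DNN as a composition of vector-valued functions whose loss gradient is obtained by the chain rule. The NumPy sketch below illustrates that idea for a two-layer network with a squared loss; the layer sizes, tanh activation, and data are made up for illustration and do not follow the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(2)        # toy input and target
W1, W2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 3))

# Forward pass: yhat = W2 tanh(W1 x), loss = 0.5 ||yhat - y||^2
z1 = W1 @ x
a1 = np.tanh(z1)
yhat = W2 @ a1
loss = 0.5 * np.sum((yhat - y) ** 2)

# Backward pass: chain rule applied layer by layer
d_yhat = yhat - y                 # dL/dyhat
dW2 = np.outer(d_yhat, a1)        # dL/dW2
d_a1 = W2.T @ d_yhat              # dL/da1
d_z1 = d_a1 * (1 - a1 ** 2)       # tanh'(z1) = 1 - tanh(z1)^2
dW1 = np.outer(d_z1, x)           # dL/dW1
```

The gradients dW1 and dW2 can be checked against finite differences of the loss, which is a common way to verify a hand-written backward pass.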

Convergence of the mini-batch SIHT algorithm

no code implementations • 29 Sep 2022 • Saeed Damadi, Jinglai Shen

To the best of our knowledge, this is the first result in the sparse-optimization literature showing that the sequence of stochastic function values converges with probability one while the mini-batch size is kept fixed for all steps.
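SIHT is the stochastic counterpart of IHT in which the full gradient is replaced by a mini-batch estimate. The sketch below only illustrates the setting of the convergence result, a fixed mini-batch size at every step; the sampling scheme, step size, and function names are assumptions, not the paper's algorithm.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    out[idx] = w[idx]
    return out

def mini_batch_siht(grad_batch, n, w0, k, batch_size, lr=0.05, steps=500, seed=0):
    """Stochastic IHT: gradient step on a fixed-size mini-batch, then hard thresholding."""
    rng = np.random.default_rng(seed)
    w = hard_threshold(w0, k)
    for _ in range(steps):
        batch = rng.choice(n, size=batch_size, replace=False)  # same mini-batch size at every step
        w = hard_threshold(w - lr * grad_batch(w, batch), k)
    return w
```

For a sparse least-squares objective, grad_batch could be, for example, lambda w, idx: A[idx].T @ (A[idx] @ w - b[idx]) / len(idx).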

Amenable Sparse Network Investigator

no code implementations • 18 Feb 2022 • Saeed Damadi, Erfan Nouri, Hamed Pirsiavash

ASNI-II learns both a sparse network and an initialization that is quantized and compressed, and from which the sparse network can be trained (a rough illustration follows this entry).

Quantization
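ASNI-II is only described at a high level here, so the snippet below is a rough illustration of the two ingredients the abstract mentions, a sparsity mask over the weights and a quantized initialization, not the ASNI-II procedure itself; the magnitude-based mask rule, the uniform codebook, and all names are assumptions.

```python
import numpy as np

def magnitude_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude fraction (1 - sparsity) of weights."""
    k = max(1, int(round((1.0 - sparsity) * w.size)))
    thresh = np.sort(np.abs(w), axis=None)[-k]
    return np.abs(w) >= thresh

def quantize(w, n_levels=8):
    """Uniformly quantize values to n_levels codes over the observed range."""
    lo, hi = w.min(), w.max()
    codes = np.round((w - lo) / (hi - lo) * (n_levels - 1))
    return lo + codes * (hi - lo) / (n_levels - 1)

# Toy usage: a sparse, quantized initialization for one layer
rng = np.random.default_rng(0)
w_init = rng.standard_normal((64, 64))
mask = magnitude_mask(w_init, sparsity=0.9)
w_sparse_quantized = quantize(w_init) * mask
```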
