no code implementations • 19 Jul 2022 • Sandipan Das, Alireza M. Javid, Prakash Borpatra Gohain, Yonina C. Eldar, Saikat Chatterjee
NGP selects $N$ features efficiently when $N \ll P$, and its sequential selection procedure yields a notion of feature importance in descending order.
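The idea of sequential selection with descending feature importance can be illustrated with a generic greedy forward-selection sketch (an illustrative stand-in, not the actual NGP algorithm): at each step, add the feature that most reduces the least-squares residual.

```python
import numpy as np

def greedy_forward_selection(X, y, n_select):
    """Greedily pick features one at a time, each time adding the feature
    that most reduces the least-squares fitting error.
    Illustrative sketch only -- not the actual NGP algorithm."""
    n_samples, n_features = X.shape
    selected = []
    for _ in range(n_select):
        best_j, best_err = None, np.inf
        for j in range(n_features):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            # Least-squares fit using the candidate feature set
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.linalg.norm(y - cols @ coef)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected  # ordered by descending importance

# Toy usage: the target depends only on features 0 and 3 of P = 10
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3]
print(greedy_forward_selection(X, y, 2))
```

The order in which features enter the selected set serves as the importance ranking.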
no code implementations • 6 Oct 2021 • Pol Grau Jurado, Xinyue Liang, Alireza M. Javid, Saikat Chatterjee
In the existing SSFN, part of each weight matrix is trained via a layer-wise convex optimization approach (supervised training), while the remaining part is chosen as a random matrix instance (unsupervised training).
1 code implementation • 18 Nov 2020 • Sandipan Das, Prakash B. Gohain, Alireza M. Javid, Yonina C. Eldar, Saikat Chatterjee
Using statistical model-based data generation, we develop an experimental setup for evaluating neural networks (NNs).
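The benefit of model-based data generation is that the achievable error is known by construction. A minimal sketch (the generative model below is hypothetical, chosen only for illustration): sample inputs from a known distribution, produce targets through a known map plus Gaussian noise, and score any learned predictor against the known noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generative model: targets come from a known nonlinear map
# plus Gaussian noise, so the best achievable MSE is known in advance.
n, d = 1000, 5
W = rng.standard_normal(d)
X = rng.standard_normal((n, d))
noise_std = 0.1
y = np.tanh(X @ W) + noise_std * rng.standard_normal(n)

# Any trained NN can now be benchmarked against the noise floor:
# its expected MSE cannot go below noise_std**2 on fresh data.
oracle_mse = np.mean((y - np.tanh(X @ W)) ** 2)
print(f"noise floor ~ {oracle_mse:.4f} (target {noise_std**2:.4f})")
```

Because the data-generating model is fully specified, the gap between a network's test error and `noise_std**2` directly measures how far the network is from the optimum.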
1 code implementation • 22 Oct 2020 • Alireza M. Javid, Sandipan Das, Mikael Skoglund, Saikat Chatterjee
We use a combination of random weights and the rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to a trained neural network so that it can achieve a lower training loss.
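A rough sketch of this idea, under stated assumptions (the hidden representation `H`, targets `T`, the dimensions, and the ridge regularizer `lam` are all stand-ins, not taken from the paper): map the trained network's last-layer features through a fixed random-weight ReLU layer, then refit only the output layer in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: `H` plays the role of a trained network's hidden representation
# (n_samples x d) and `T` the training targets (n_samples x q).
n, d, q, m = 200, 16, 3, 64
H = rng.standard_normal((n, d))
T = rng.standard_normal((n, q))

# ReDense-style step: pass H through a fixed random-weight ReLU layer ...
W_random = rng.standard_normal((d, m)) / np.sqrt(d)
Z = np.maximum(H @ W_random, 0.0)  # ReLU activation, no training needed

# ... then refit only the output layer by regularized least squares.
lam = 1e-3  # ridge regularization (an assumed hyperparameter)
W_out = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ T)
pred = Z @ W_out
print("training MSE:", np.mean((pred - T) ** 2))
```

Only the new output layer is optimized; the random ReLU layer lifts the features into a higher-dimensional space where a lower training loss is attainable.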
no code implementations • 29 Sep 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
no code implementations • 7 May 2020 • Alireza M. Javid, Xinyue Liang, Arun Venkitaraman, Saikat Chatterjee
We provide a predictive analysis of the spread of COVID-19, the disease caused by the SARS-CoV-2 virus, using the dataset made publicly available online by Johns Hopkins University.
no code implementations • 10 Apr 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
In this work, we exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
no code implementations • 29 Mar 2020 • Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, Saikat Chatterjee
We show that the proposed architecture is norm-preserving and provides an invertible feature vector; it can therefore be used to reduce the training cost of any other learning method that employs linear projection to estimate the target.
no code implementations • 17 May 2019 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Shumpei Kikuta, Dong Liu, Partha P. Mitra, Mikael Skoglund
We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach that estimates the number of layers and the number of nodes, and learns the weight matrices.
no code implementations • 12 Mar 2018 • Arun Venkitaraman, Alireza M. Javid, Saikat Chatterjee
We consider a neural network architecture with randomized features, a sign-splitter, followed by rectified linear units (ReLU).
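The sign-splitter idea can be checked in a few lines (a minimal sketch of the underlying property, not the authors' full architecture): feeding both $z$ and $-z$ through a ReLU and concatenating the results loses no information, since $\mathrm{ReLU}(z) - \mathrm{ReLU}(-z) = z$, and it preserves the Euclidean norm.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(8)  # a pre-activation vector (e.g. z = V @ x)

# Sign-splitter followed by ReLU: pass both z and -z through the ReLU
# and concatenate, so the nonlinearity destroys no information.
relu = lambda v: np.maximum(v, 0.0)
split = np.concatenate([relu(z), relu(-z)])

# Invertibility: z is recovered exactly from the split representation.
z_back = split[:8] - split[8:]
assert np.allclose(z_back, z)

# Norm preservation: for each entry, ReLU(z_i)^2 + ReLU(-z_i)^2 = z_i^2.
assert np.isclose(np.linalg.norm(split), np.linalg.norm(z))
print("invertible and norm-preserving: OK")
```

Each coordinate of `z` lands in exactly one half of `split` (positive part or negated negative part), which is why both the value and the norm survive the ReLU.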
1 code implementation • 23 Oct 2017 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Partha P. Mitra, Mikael Skoglund
The developed network is expected to show good generalization power due to appropriate regularization and use of random weights in the layers.