Search Results for author: Alireza M. Javid

Found 11 papers, 3 papers with code

Neural Greedy Pursuit for Feature Selection

no code implementations • 19 Jul 2022 • Sandipan Das, Alireza M. Javid, Prakash Borpatra Gohain, Yonina C. Eldar, Saikat Chatterjee

NGP is efficient in selecting $N$ features when $N \ll P$, and the sequential selection procedure provides a notion of feature importance in descending order.

Feature Importance • Feature Selection
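As a rough illustration of the greedy pursuit idea, the sketch below performs forward feature selection with a least-squares model standing in for the paper's neural network predictor; the data, sizes, and solver are assumptions made only to keep the example runnable.

```python
# Minimal sketch of greedy forward feature selection in the spirit of NGP.
# A least-squares fit stands in for the neural network predictor (an
# assumption, not the authors' implementation).
import numpy as np

def greedy_feature_selection(X, y, N):
    """Select N of P features greedily; returns indices in selection order."""
    P = X.shape[1]
    selected = []
    for _ in range(N):
        best_j, best_err = None, np.inf
        for j in range(P):
            if j in selected:
                continue
            A = X[:, selected + [j]]
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.mean((A @ w - y) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)   # the order itself ranks feature importance
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))            # N << P regime: pick 3 of 50
y = 2.0 * X[:, 4] - 1.5 * X[:, 17] + 0.1 * rng.standard_normal(200)
print(greedy_feature_selection(X, y, 3))      # expected to start with 4 and 17
```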

Use of Deterministic Transforms to Design Weight Matrices of a Neural Network

no code implementations • 6 Oct 2021 • Pol Grau Jurado, Xinyue Liang, Alireza M. Javid, Saikat Chatterjee

In the existing SSFN, part of each weight matrix is trained using a layer-wise convex optimization approach (supervised training), while the remaining part is chosen as a random matrix instance (unsupervised training).
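A minimal sketch of that split, assuming illustrative layer sizes and a ridge solver for the convex step: the upper block of the weight matrix is fit to supervision targets in closed form, while the lower block is a random instance.

```python
# Build a layer weight matrix whose upper block is learned by a convex
# (ridge) least-squares fit and whose lower block is random, echoing the
# supervised/unsupervised split described above. All shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_trained, n_random = 64, 10, 54      # hypothetical layer sizes
X = rng.standard_normal((500, n_in))        # layer input (samples x n_in)
T = rng.standard_normal((500, n_trained))   # supervision targets for this layer

# Trained part: ridge regression, a convex problem with a closed-form solution.
lam = 1e-2
W_trained = np.linalg.solve(X.T @ X + lam * np.eye(n_in), X.T @ T).T

# Random part: an unsupervised random matrix instance.
W_random = rng.standard_normal((n_random, n_in)) / np.sqrt(n_in)

W = np.vstack([W_trained, W_random])        # (n_trained + n_random) x n_in
Z = np.maximum(X @ W.T, 0.0)                # layer output after ReLU
print(W.shape, Z.shape)
```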

Statistical model-based evaluation of neural networks

1 code implementation • 18 Nov 2020 • Sandipan Das, Prakash B. Gohain, Alireza M. Javid, Yonina C. Eldar, Saikat Chatterjee

Using statistical model-based data generation, we develop an experimental setup for the evaluation of neural networks (NNs).
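The sketch below illustrates the evaluation principle rather than the paper's setup: data are drawn from a known two-class Gaussian model so the Bayes-optimal error is available in closed form, and any classifier under test (here a trivial threshold rule standing in for an NN) can be scored against that reference.

```python
# Model-based evaluation: draw data from a known two-class Gaussian model
# so the Bayes-optimal error is computable, then score a classifier against
# that reference. The threshold rule below is a stand-in for the NN under test.
import math
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 1.0, 1.0, 10000
y = rng.integers(0, 2, n)                           # balanced labels
x = rng.normal(np.where(y == 1, mu, -mu), sigma)    # class-conditional Gaussians

bayes_err = 0.5 * math.erfc(mu / (sigma * math.sqrt(2)))   # Phi(-mu/sigma)
model_err = np.mean((x > 0).astype(int) != y)              # classifier under test
print(f"Bayes error {bayes_err:.4f} vs model error {model_err:.4f}")
```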

A ReLU Dense Layer to Improve the Performance of Neural Networks

1 code implementation • 22 Oct 2020 • Alireza M. Javid, Sandipan Das, Mikael Skoglund, Saikat Chatterjee

We use a combination of random weights and the rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to a trained neural network so that it achieves a lower training loss.
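A minimal numpy sketch of the idea, with hypothetical sizes and a stand-in for the trained network's outputs: a fixed random lift with paired signs keeps the original outputs linearly recoverable, so refitting only the readout by least squares cannot increase the training loss.

```python
# ReDense-style add-on: lift a trained model's outputs with a fixed random
# matrix and ReLU, then fit only a new linear readout by least squares.
# The base "trained model" outputs below are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n, d_out, d_lift = 1000, 10, 256
Y_hat = rng.standard_normal((n, d_out))             # stand-in: trained net outputs
Y = Y_hat + 0.3 * rng.standard_normal((n, d_out))   # training targets

R = rng.standard_normal((d_out, d_lift)) / np.sqrt(d_out)  # fixed random weights
A = Y_hat @ R
Z = np.maximum(np.c_[A, -A], 0.0)   # ReLU(a) - ReLU(-a) = a, so Y_hat stays recoverable

W, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # refit only the final linear readout
print(f"train MSE {np.mean((Y_hat - Y) ** 2):.4f} -> {np.mean((Z @ W - Y) ** 2):.4f}")
```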

A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning

no code implementations • 29 Sep 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee

We design a low-complexity decentralized learning algorithm to train a recently proposed large neural network across distributed processing nodes (workers).
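One way to see how a layer-wise step can be decentralized with centralized equivalence is through sufficient statistics, sketched below under illustrative assumptions (a ridge step per layer, three workers): each worker shares only small Gram matrices, never raw samples, yet the result matches the centralized solution exactly.

```python
# One layer-wise ridge step with centralized equivalence: workers form local
# sufficient statistics and summing them reproduces the centralized
# closed-form solution. The 3-worker split and ridge solver are assumptions.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((900, 32))
Y = rng.standard_normal((900, 5))
lam = 1e-1

# Centralized solution.
W_central = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ Y)

# Decentralized: workers hold disjoint data shards and share only
# 32x32 and 32x5 statistics.
shards = np.array_split(np.arange(900), 3)
G = sum(X[s].T @ X[s] for s in shards)
H = sum(X[s].T @ Y[s] for s in shards)
W_decent = np.linalg.solve(G + lam * np.eye(32), H)

print(np.allclose(W_central, W_decent))   # True: centralized equivalence
```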

Predictive Analysis of COVID-19 Time-series Data from Johns Hopkins University

no code implementations • 7 May 2020 • Alireza M. Javid, Xinyue Liang, Arun Venkitaraman, Saikat Chatterjee

We provide a predictive analysis of the spread of COVID-19, the disease caused by the SARS-CoV-2 virus, using the dataset made publicly available online by Johns Hopkins University.

Time Series • Time Series Analysis
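Purely for illustration, and not the model used in the paper: the snippet below fits a quadratic trend in log-space to a synthetic cumulative-case series (standing in for the JHU data) and extrapolates a week ahead.

```python
# Illustrative curve-fit forecast on a synthetic cumulative-case series.
# This is not the paper's predictive model; the series is fabricated only
# so the example runs self-contained.
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(60)
cases = np.exp(0.08 * days + 2.0) * (1 + 0.05 * rng.standard_normal(60))
cases = np.maximum.accumulate(cases)              # keep the series cumulative

coeffs = np.polyfit(days, np.log(cases), deg=2)   # log-space polynomial fit
future = np.arange(60, 67)
forecast = np.exp(np.polyval(coeffs, future))
print(np.round(forecast).astype(int))             # 7-day-ahead point forecast
```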

Asynchronous Decentralized Learning of a Neural Network

no code implementations • 10 Apr 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee

In this work, we exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self size-estimating feedforward neural network (SSFN) in a decentralized scenario.
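The toy below mimics the ARock style of computation, asynchronous lock-free updates of coordinate blocks of a shared iterate, on a plain least-squares objective; Python threads and all problem sizes are assumptions, and the paper applies the framework to SSFN training rather than this toy problem.

```python
# ARock-style toy: threads update random coordinate blocks of a shared
# iterate without global synchronization, driving gradient steps on a
# least-squares problem. Sizes and the threading setup are illustrative.
import threading
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((300, 30))
b = rng.standard_normal(300)
x = np.zeros(30)                        # shared iterate, updated lock-free
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the full gradient

def worker(seed, iters=2000):
    local = np.random.default_rng(seed)
    for _ in range(iters):
        blk = local.integers(0, 3) * 10           # pick one of 3 blocks
        idx = slice(blk, blk + 10)
        g = A[:, idx].T @ (A @ x - b)             # gradient w.r.t. the block
        x[idx] -= step * g                        # asynchronous block update

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(np.linalg.norm(A.T @ (A @ x - b)))          # near zero at convergence
```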

High-dimensional Neural Feature Design for Layer-wise Reduction of Training Cost

no code implementations • 29 Mar 2020 • Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, Saikat Chatterjee

We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and it can therefore be used to reduce the training cost of any other learning method that employs a linear projection to estimate the target.
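Both claimed properties can be checked numerically for one construction consistent with the description, a sign-splitter plus ReLU on top of an orthonormal projection; the specific choice of W below is an illustrative assumption, not necessarily the paper's exact design.

```python
# Numerical check: with orthonormal W, the map x -> [ReLU(Wx); ReLU(-Wx)]
# preserves the Euclidean norm, and x is recovered exactly because
# ReLU(Wx) - ReLU(-Wx) = Wx.
import numpy as np

rng = np.random.default_rng(7)
W, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # orthonormal weights
x = rng.standard_normal(16)

z = W @ x
f = np.concatenate([np.maximum(z, 0), np.maximum(-z, 0)])  # feature vector

print(np.isclose(np.linalg.norm(f), np.linalg.norm(x)))    # norm-preserving
x_rec = W.T @ (f[:16] - f[16:])                            # invert the map
print(np.allclose(x_rec, x))                               # invertible
```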

SSFN -- Self Size-estimating Feed-forward Network with Low Complexity, Limited Need for Human Intervention, and Consistent Behaviour across Trials

no code implementations • 17 May 2019 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Shumpei Kikuta, Dong Liu, Partha P. Mitra, Mikael Skoglund

We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach that estimates the number of layers and the number of nodes while learning the weight matrices.

Image Classification
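A schematic of the self size-estimating idea, under assumptions made purely for illustration (random ReLU layers of width 50, a ridge readout, a plateau-based stopping rule): grow the network one layer at a time and stop when validation loss no longer improves.

```python
# Schematic size estimation: add a random ReLU layer, refit a ridge readout,
# and stop once validation loss stops improving. Widths, solver, and the
# stopping rule are illustrative assumptions, not the SSFN algorithm itself.
import numpy as np

rng = np.random.default_rng(8)
Xtr, Xva = rng.standard_normal((400, 20)), rng.standard_normal((200, 20))
w_true = rng.standard_normal(20)
Ytr, Yva = np.tanh(Xtr @ w_true)[:, None], np.tanh(Xva @ w_true)[:, None]

def ridge_val_err(Z, Y, Zv, Yv, lam=1e-2):
    W = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)
    return np.mean((Zv @ W - Yv) ** 2)

Ztr, Zva, best, depth = Xtr, Xva, np.inf, 0
while depth < 10:                         # hard cap as a safeguard
    R = rng.standard_normal((Ztr.shape[1], 50)) / np.sqrt(Ztr.shape[1])
    Ztr_new, Zva_new = np.maximum(Ztr @ R, 0), np.maximum(Zva @ R, 0)
    err = ridge_val_err(Ztr_new, Ytr, Zva_new, Yva)
    if err >= best:                       # stop once validation loss plateaus
        break
    best, depth = err, depth + 1
    Ztr, Zva = Ztr_new, Zva_new
print(f"estimated depth: {depth} layers, val MSE {best:.4f}")
```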

R3Net: Random Weights, Rectifier Linear Units and Robustness for Artificial Neural Network

no code implementations • 12 Mar 2018 • Arun Venkitaraman, Alireza M. Javid, Saikat Chatterjee

We consider a neural network architecture built from randomized features and a sign-splitter followed by rectified linear units (ReLU).
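The sketch below builds an R3Net-style feature map under illustrative assumptions: a fixed random projection that is never trained, a sign-splitter keeping both polarities, ReLU, and a ridge-regularized linear readout as the only learned component.

```python
# R3Net-style pipeline sketch: random projection -> sign-splitter -> ReLU,
# with only a linear readout trained. Dimensions, the toy target, and the
# ridge readout are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(9)
X = rng.standard_normal((500, 8))
y = np.sign(X[:, 0] * X[:, 1])                  # a simple nonlinear target

W = rng.standard_normal((8, 100)) / np.sqrt(8)  # random weights, never trained
Z = X @ W
F = np.maximum(np.c_[Z, -Z], 0.0)               # sign-splitter + ReLU

lam = 1e-1                                      # only the readout is learned
w = np.linalg.solve(F.T @ F + lam * np.eye(200), F.T @ y)
acc = np.mean(np.sign(F @ w) == y)
print(f"train accuracy {acc:.2f}")
```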

Progressive Learning for Systematic Design of Large Neural Networks

1 code implementation • 23 Oct 2017 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Partha P. Mitra, Mikael Skoglund

The developed network is expected to show good generalization power due to appropriate regularization and the use of random weights in the layers.
