Neuroscientific data analysis has traditionally relied on linear algebra and stochastic process theory.
We introduce a paired generative and fitting model ("Misparametrized Sparse Regression", or MiSpaR) and show that the overfitting peak can be dissociated from the point at which the fitting function gains enough degrees of freedom to match the data-generating model and thus generalize well.
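The MiSpaR model pair itself is not specified here; the following minimal Python sketch (standard minimum-norm least squares on synthetic sparse-regression data, with all sizes chosen purely for illustration) shows the dissociation in its simplest form: the fit already has enough degrees of freedom to match the generative model at p = s = 5, while the overfitting peak appears near the interpolation threshold p = n = 50.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p_max, s = 50, 1000, 120, 5  # assumed sizes, illustration only

# Sparse ground truth: only the first s of p_max coefficients are nonzero.
w_true = np.zeros(p_max)
w_true[:s] = rng.normal(size=s)

X_train = rng.normal(size=(n_train, p_max))
X_test = rng.normal(size=(n_test, p_max))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_true  # noiseless targets: measures pure generalization error

for p in (5, 10, 25, 45, 50, 55, 80, 120):
    # Minimum-norm least-squares fit using the first p features.
    w_hat = np.linalg.pinv(X_train[:, :p]) @ y_train
    mse = np.mean((X_test[:, :p] @ w_hat - y_test) ** 2)
    print(f"p = {p:3d}   test MSE = {mse:8.3f}")
```

The printed test error is already low at p = s, spikes near p = n_train, and descends again beyond it, separating the interpolation peak from the point at which the degrees of freedom match the generative model.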
We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach that estimates the number of layers and the number of nodes per layer while learning the weight matrices.
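The SSFN joint optimization is not reproduced here; as a rough sketch of the size-estimation idea only (a fixed node count per layer, random hidden weights, a regularized least-squares output layer, and a stopping rule on the training cost; the actual SSFN procedure and its regularization differ), consider:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_output_layer(H, Y, reg=1e-2):
    """Regularized least-squares map from hidden activations H to targets Y."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + reg * np.eye(d), H.T @ Y)

def grow_network(X, Y, max_layers=10, nodes_per_layer=100, tol=1e-3):
    """Greedily add random-weight ReLU layers while the training cost improves."""
    H, layers = X, []
    W_out = fit_output_layer(H, Y)
    best = np.mean((H @ W_out - Y) ** 2)
    for _ in range(max_layers):
        W = rng.normal(scale=1.0 / np.sqrt(H.shape[1]),
                       size=(H.shape[1], nodes_per_layer))
        H_new = np.maximum(H @ W, 0.0)        # ReLU layer with random weights
        W_new = fit_output_layer(H_new, Y)
        cost = np.mean((H_new @ W_new - Y) ** 2)
        if best - cost < tol:                 # no useful improvement: stop growing
            break
        H, W_out, best, layers = H_new, W_new, cost, layers + [W]
    return layers, W_out

X = rng.normal(size=(500, 20))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(500, 1))
layers, W_out = grow_network(X, Y)
print("estimated number of hidden layers:", len(layers))
```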
Our expectation is that the local estimates at each node improve quickly and converge, limiting the communication of estimates between nodes and reducing the processing time.
This analysis is possible because the SGD algorithm reduces to a stochastic linear system near an interpolating minimum of the loss function.
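To spell out the reduction (a standard second-order expansion, assuming a twice-differentiable loss and a minimum w* that interpolates every training sample, so each per-sample gradient vanishes at w*):

```latex
% At an interpolating minimum w*, every per-sample loss \ell_i is itself
% minimized, so \nabla\ell_i(w^*) = 0 and a second-order expansion gives
\[
  \nabla \ell_i(w) \approx H_i\,(w - w^*), \qquad H_i := \nabla^2 \ell_i(w^*).
\]
% The SGD update with step size \eta and a randomly drawn sample index i_t,
% w_{t+1} = w_t - \eta \nabla\ell_{i_t}(w_t), therefore becomes linear:
\[
  w_{t+1} - w^* \;\approx\; \bigl(I - \eta H_{i_t}\bigr)\,(w_t - w^*),
\]
% i.e. the error w_t - w* evolves as a stochastic linear system driven by
% the randomly sampled Hessians H_{i_t}.
```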
The developed network is expected to generalize well owing to appropriate regularization and the use of random weights in its layers.
The algorithm is iterative and exchanges intermediate estimates of a sparse signal over a network.
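The specific update rule is not given in this section; one plausible instantiation of such an exchange is a consensus-averaging variant of iterative soft-thresholding, sketched below under assumed sizes and a ring topology (the actual algorithm and network model may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(v, lam):
    """Elementwise soft-thresholding: the proximal step for the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Hypothetical setup: each node observes the same sparse signal through
# its own measurement matrix (sizes chosen for illustration).
n_nodes, m, d, s = 4, 30, 100, 5
x_true = np.zeros(d)
x_true[rng.choice(d, size=s, replace=False)] = rng.normal(size=s)
A = [rng.normal(size=(m, d)) / np.sqrt(m) for _ in range(n_nodes)]
y = [A_k @ x_true + 0.01 * rng.normal(size=m) for A_k in A]

# Ring topology: each node mixes its estimate with its two neighbours'.
W = np.zeros((n_nodes, n_nodes))
for k in range(n_nodes):
    W[k, k] = 0.5
    W[k, (k - 1) % n_nodes] = W[k, (k + 1) % n_nodes] = 0.25

x = np.zeros((n_nodes, d))
step, lam = 0.1, 1e-3
for _ in range(300):
    x = W @ x  # communication round: exchange intermediate estimates
    for k in range(n_nodes):  # local proximal-gradient step at each node
        grad = A[k].T @ (A[k] @ x[k] - y[k])
        x[k] = soft_threshold(x[k] - step * grad, step * lam)

print("max per-node estimation error:", np.max(np.abs(x - x_true)))
```

If the local estimates converge quickly, the number of communication rounds (the `W @ x` mixing steps above) can be kept small, which is the behaviour anticipated earlier in this section.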