When the number of pieces is unknown, we prove that the neural complexities of any CPWL function grow at most polynomially in the number of distinct linear components for low-dimensional inputs and at most factorially in the worst case, significantly improving on existing results in the literature.
The proposed optimization problem is non-convex, so we develop a majorization-minimization-based iterative procedure to estimate the structured matrix; each iteration solves a semidefinite program.
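As a toy illustration of the majorization-minimization idea (not the paper's actual SDP-per-iteration procedure, and all names here are made up), one can minimize a non-convex log-penalized least-squares objective by majorizing each concave log term with its tangent quadratic, so that every iteration reduces to a weighted ridge regression:

```python
import numpy as np

def mm_log_penalized_ls(A, y, lam=1e-4, eps=1e-3, n_iter=50):
    """Toy majorization-minimization loop: minimize the non-convex objective
        ||A x - y||^2 + lam * sum_i log(eps + x_i^2)
    by majorizing each concave log term at the current iterate, which turns
    every MM iteration into a closed-form weighted ridge regression."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = lam / (eps + x**2)                       # tangent weights at current iterate
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y)
    return x
```

Because each surrogate upper-bounds the true objective and touches it at the current iterate, the objective value is non-increasing across iterations, which is the defining property of any MM scheme.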
Specifically, we study the effects of using different numbers of subbands and various sparsity penalty terms for quasi-sparse, sparse, and dispersive systems.
To codify such a difference in nonlinearities and reveal a linear estimation property, we define ResNEsts, i.e., Residual Nonlinear Estimators, obtained from standard ResNets by simply dropping the nonlinearities applied to the last residual representation.
Sparse signal recovery algorithms like sparse Bayesian learning work well, but their complexity grows quickly when tackling higher-dimensional parametric dictionaries.
Block-sparse signal recovery without knowledge of block sizes and boundaries, such as those encountered in multi-antenna mmWave channel models, is a hard problem for compressed sensing (CS) algorithms.
In this paper, we explore the small-cell uplink access point (AP) placement problem in the context of throughput-optimality and provide solutions while taking into consideration inter-cell interference.
Each iterative unit is a neural computation module comprising three sub-modules: the likelihood module, the encoder module, and the predictor module.
We propose a novel method called the Relevance Subject Machine (RSM) to solve the person re-identification (re-id) problem.
In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning (SBL).
In this paper, we present a novel Bayesian approach to simultaneously recover block-sparse signals in the presence of outliers.
We show that the proposed framework encompasses a large class of S-NNLS algorithms and provide a computationally efficient inference procedure based on multiplicative update rules.
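As a hedged sketch of what a multiplicative update looks like for one simple member of the S-NNLS family (an L1-penalized nonnegative least-squares problem; this is an illustration, not the paper's full inference procedure), the update only rescales the current iterate, so nonnegativity is preserved automatically:

```python
import numpy as np

def snnls_multiplicative(A, y, lam=1e-3, n_iter=500):
    """Illustrative multiplicative update for a simple S-NNLS instance
    (assumes the entries of A and y are nonnegative):
        minimize 0.5 * ||A x - y||^2 + lam * sum(x)   subject to x >= 0.
    Each step multiplies x elementwise by a nonnegative ratio, so the
    iterates remain in the feasible set without any projection."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        x = x * (A.T @ y) / (A.T @ (A @ x) + lam + 1e-12)
    return x
```

This mirrors the classic Lee-Seung style of multiplicative rule: the numerator/denominator ratio equals one at a stationary point, so fixed points of the update are stationary points of the penalized objective.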
In this paper, we propose a generalized scale mixture family of distributions, namely the Power Exponential Scale Mixture (PESM) family, to model the sparsity-inducing priors currently used for sparse signal recovery (SSR).
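As an illustrative sketch (with notation assumed here, not taken from the paper), a PESM prior can be written as a continuous mixture of power exponential (generalized Gaussian) densities over a scale variable $\gamma$:

$$
p(x) \;=\; \int_0^\infty \mathrm{PE}(x;\, p, \gamma)\, p(\gamma)\, d\gamma,
\qquad
\mathrm{PE}(x;\, p, \gamma) \;=\; \frac{p}{2\,\gamma^{1/p}\,\Gamma(1/p)} \exp\!\left(-\frac{|x|^{p}}{\gamma}\right).
$$

Setting $p = 2$ recovers the familiar Gaussian scale mixture family, which is why PESM can be viewed as a generalization of the scale mixtures commonly used to represent sparsity-inducing priors.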
In particular, the proposed algorithm ensured that BCI classification and drowsiness estimation degraded little even when the data were compressed by 80%, making it well suited for continuous wireless telemonitoring of multichannel signals.
As a lossy compression framework, compressed sensing has drawn much attention in wireless telemonitoring of biosignals due to its ability to reduce energy consumption and make possible the design of low-power devices.
Compressed sensing (CS), as an emerging data compression methodology, is promising in catering to these constraints.
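A minimal sketch of the encoder side of such a CS scheme (function and parameter names are invented for illustration; recovery at the receiver, e.g. by a sparse Bayesian solver, is not shown): an $N$-sample frame is projected onto $M = (1 - \text{compression}) \cdot N$ random measurements, $y = \Phi x$.

```python
import numpy as np

def cs_compress(x, compression=0.8, seed=0):
    """Toy compressed-sensing encoder: map an N-sample frame x to
    M = (1 - compression) * N random projections y = Phi @ x.
    The cheap matrix multiply is the whole sensor-side workload, which is
    what makes CS attractive for low-power telemonitoring devices."""
    n = x.size
    m = max(1, int(round((1.0 - compression) * n)))
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # dense Gaussian sensing matrix
    return phi @ x, phi
```

With `compression=0.8`, a 100-sample frame is transmitted as only 20 measurements, matching the 80% compression figure mentioned for the telemonitoring experiments.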
The design of a telemonitoring system via a wireless body-area network with low energy consumption for ambulatory use is highly desirable.
We examine the recovery of block-sparse signals and extend the framework in two important directions: one exploits the signals' intra-block correlation, and the other generalizes the signals' block structure.