Improving Spiking Sparse Recovery via Non-Convex Penalties

19 Sep 2020  ·  Xiang Zhang, Lei Yu, Gang Zheng ·

Compared with digital methods, sparse recovery based on spiking neural networks offers significant advantages such as high computational efficiency and low power consumption. However, current spiking algorithms cannot guarantee accurate estimates, since they are typically designed to solve the classical optimization problem with convex penalties, especially the $\ell_{1}$-norm. In practice, convex penalties are observed to underestimate the true solution, while non-convex penalties can avoid this underestimation. Motivated by this, we propose an adaptive spiking sparse recovery algorithm that solves the non-convex regularized optimization problem, and we provide an analysis of its global asymptotic convergence. Experiments show that accuracy is greatly improved under different adaptive schemes.
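To make the abstract's central claim concrete, here is a minimal sketch (not from the paper) comparing the soft-thresholding operator, the proximal operator of the convex $\ell_{1}$ penalty, with firm thresholding, the proximal operator of the non-convex minimax concave penalty (MCP). The penalty choice and the parameter `gamma` are illustrative assumptions; the paper's own non-convex penalties and spiking dynamics may differ.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 penalty: every coefficient is shrunk
    # toward zero by lam, so large entries are biased (underestimated).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def firm_threshold(x, lam, gamma=3.0):
    # Proximal operator of the non-convex MCP (illustrative choice):
    # small entries are shrunk, but entries above gamma*lam pass through
    # unchanged, avoiding the underestimation of the l1 penalty.
    return np.where(
        np.abs(x) <= gamma * lam,
        gamma / (gamma - 1.0) * soft_threshold(x, lam),
        x,
    )

x = np.array([0.05, 0.5, 2.0])
lam = 0.2
print(soft_threshold(x, lam))   # the large entry 2.0 is shrunk to 1.8
print(firm_threshold(x, lam))   # the large entry 2.0 is kept at 2.0
```

The same qualitative gap, a constant bias of `lam` on every large coefficient versus no bias at all, is what motivates replacing the $\ell_{1}$ penalty with a non-convex one in the recovery dynamics.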
