Modified swarm-based metaheuristics enhance Gradient Descent initialization performance: Application for EEG spatial filtering

13 Jun 2019 · Mojtaba Moattari, Mohammad Hassan Moradi, Reza Boostani

Gradient Descent (GD) approximators often fail in solution spaces that contain convex regions at multiple scales, e.g., in subspace learning and neural network scenarios. One remedy is to run GD multiple times from different randomized initial states and keep the best solution across all runs; however, this approach has proven impractical in many cases. Even swarm-based optimizers such as Particle Swarm Optimization (PSO) or the Imperialistic Competitive Algorithm (ICA), commonly used as GD initializers, have failed to find optimal solutions in some applications. In this paper, swarm-based optimizers such as ICA and PSO are modified within a new optimization framework to improve GD performance in applications whose objective landscapes contain a large number of convex local regions at multiple scales. The performance of the proposed method is analyzed on a nonlinear subspace-filtering objective function over EEG data. The proposed metaheuristic outperforms commonly used baseline optimizers as GD initializers in both EEG classification accuracy and EEG loss-function fitness. The optimizers are also compared on several CEC 2014 benchmark functions, where our method again outperforms the other algorithms.
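To make the swarm-initialized GD idea concrete, below is a minimal sketch, not the paper's modified framework, of plain PSO used to seed gradient descent on a multimodal objective. The Rastrigin function stands in for the EEG subspace-filtering objective, and the swarm size, inertia/acceleration coefficients, learning rate, and step counts are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): PSO locates a
# promising basin of a multimodal objective, then GD refines within it.
import numpy as np

def rastrigin(x):
    # Classic multimodal benchmark with many local minima at several scales.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    # Analytic gradient of the Rastrigin function.
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def pso(obj, dim=2, n_particles=30, iters=100, bounds=(-5.12, 5.12), seed=0):
    # Standard global-best PSO; all coefficients below are assumed values.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([obj(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([obj(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

def gradient_descent(x0, grad, lr=0.01, steps=500):
    # Local refinement from the swarm-supplied initial state.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x0 = pso(rastrigin)                          # global search
x_star = gradient_descent(x0, rastrigin_grad)  # local refinement
print(rastrigin(x_star))
```

The multi-restart baseline mentioned above corresponds to replacing the `pso` call with repeated random draws of `x0` and keeping the run with the lowest final objective value.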
