Compressing deep quaternion neural networks with targeted regularization

26 Jul 2019  ·  Riccardo Vecchi, Simone Scardapane, Danilo Comminiello, Aurelio Uncini

In recent years, hyper-complex deep networks (such as complex-valued and quaternion-valued neural networks) have received renewed interest in the literature. They find applications in multiple fields, ranging from image reconstruction to 3D audio processing. Like their real-valued counterparts, quaternion-valued neural networks (QVNNs) require custom regularization strategies to avoid overfitting. In addition, many real-world applications and embedded implementations call for sufficiently compact networks, with few weights and neurons. However, the problem of regularizing and/or sparsifying QVNNs has not been properly addressed in the literature to date. In this paper, we show how to address both problems by designing targeted regularization strategies that minimize the number of connections and neurons of the network during training. To this end, we investigate two extensions of ℓ1 and structured regularization to the quaternion domain. In our experimental evaluation, we show that these tailored strategies significantly outperform classical (real-valued) regularization approaches, resulting in small networks especially suitable for low-power and real-time applications.
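The abstract describes, but does not detail, the two regularizers. As a minimal sketch (not the authors' code), assuming a quaternion layer stores its weights as four real tensors for the components (1, i, j, k), a quaternion-level ℓ1 penalty can be written as a group penalty over the four components of each weight, so that whole quaternion connections are driven to zero rather than individual real components; a structured variant groups all quaternion weights of a neuron so whole neurons can be pruned. Function names and the grouping axis below are illustrative assumptions.

```python
import torch

def quaternion_l1(w_r, w_i, w_j, w_k, eps=1e-8):
    """Sum of quaternion norms |q| = sqrt(r^2 + i^2 + j^2 + k^2).

    Penalizing the norm of each quaternion weight as a whole (rather
    than its four real components independently) acts like a group
    lasso: entire quaternion connections are pushed to zero together.
    """
    norms = torch.sqrt(w_r**2 + w_i**2 + w_j**2 + w_k**2 + eps)
    return norms.sum()

def quaternion_structured(w_r, w_i, w_j, w_k, dim=1, eps=1e-8):
    """Structured variant: group all quaternion weights of a neuron.

    For weight matrices of shape (out_features, in_features),
    summing squared components along `dim=1` gives one norm per
    output neuron, so the penalty can prune whole neurons.
    """
    sq = w_r**2 + w_i**2 + w_j**2 + w_k**2
    return torch.sqrt(sq.sum(dim=dim) + eps).sum()

# Usage sketch: add the penalty to the task loss with a small
# coefficient, e.g.
#   loss = task_loss + 1e-4 * quaternion_l1(W_r, W_i, W_j, W_k)
```

After training, weights (or neurons) whose quaternion norm falls below a small threshold can be removed, yielding the compact networks the paper targets for low-power and real-time applications.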
