no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida
Axial CNNs are predicated on the assumption that the dataset supports approximately separable convolution operations with little or no loss of training accuracy.
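The separability assumption above can be illustrated with a minimal NumPy sketch (not the paper's implementation): when a 2-D kernel is rank-1, i.e. an outer product of two 1-D kernels, applying the two 1-D (axial) passes reproduces the full 2-D convolution exactly. The kernel values and input size here are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid'-mode 2-D cross-correlation."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
kv = np.array([1.0, 2.0, 1.0])   # vertical 1-D kernel (assumed values)
kh = np.array([0.5, 1.0, 0.5])   # horizontal 1-D kernel (assumed values)
k2d = np.outer(kv, kh)           # the full, separable 3x3 kernel

full = conv2d_valid(x, k2d)                                    # one 2-D pass
axial = conv2d_valid(conv2d_valid(x, kv[:, None]), kh[None, :])  # two 1-D passes
print(np.allclose(full, axial))  # True: axial passes match the 2-D conv
```

For a separable K x K kernel, the axial form needs 2K multiply-adds per output instead of K^2, which is the source of the parameter and FLOP savings when real kernels are close enough to separable.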
We conduct experiments on the CIFAR benchmarks, SVHN, and Tiny ImageNet datasets and achieve better performance with fewer trainable parameters and FLOPs.
Recently, many deep networks have introduced hypercomplex and related calculations into their architectures.
1 code implementation • 4 Oct 2021 • Nazmul Shahadat, Anthony S. Maida
In recent years, hypercomplex-inspired neural networks (HCNNs) have been used to improve deep learning architectures due to their ability to enable channel-based weight sharing, treat colors as a single entity, and improve representational coherence within the layers.
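The channel-based weight sharing mentioned above can be sketched with the quaternion Hamilton product, the standard construction in hypercomplex networks: a quaternion-valued map mixes four real channels using only four weight components, where an unconstrained real map would use sixteen. This toy example is an assumption-level illustration, not the architecture from these papers.

```python
import numpy as np

def hamilton_matrix(r, i, j, k):
    """Real 4x4 matrix realizing left-multiplication by quaternion r + i*i + j*j + k*k.
    Only 4 free parameters are shared across all 16 matrix entries."""
    return np.array([
        [r, -i, -j, -k],
        [i,  r, -k,  j],
        [j,  k,  r, -i],
        [k, -j,  i,  r],
    ])

# Treat a 4-channel feature (e.g. RGB + bias channel) as one quaternion,
# so all four channels are transformed as a single entity.
x = np.array([1.0, 0.0, 0.0, 0.0])       # the quaternion 1
W = hamilton_matrix(0.0, 1.0, 0.0, 0.0)  # multiply by the pure quaternion i
y = W @ x
print(y)  # [0. 1. 0. 0.] : i * 1 = i
```

Because the four components parameterize the whole 4x4 mixing matrix, a quaternion layer uses roughly a quarter of the weights of its real counterpart while still coupling every channel with every other, which is what gives these layers their representational coherence.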