no code implementations • 12 Jan 2023 • Nelly Elsayed, Zag ElSayed, Anthony S. Maida
Long short-term memory (LSTM) is a robust recurrent neural network architecture for learning sequential data.
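The standard LSTM cell behind that claim can be summarized by its gate equations; below is a minimal NumPy sketch of one time step (an illustration, not the authors' code).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step of a standard LSTM cell.  W, U, b stack the
    parameters of the input, forget, output, and candidate gates."""
    z = W @ x + U @ h_prev + b               # stacked pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)          # gated cell-state update
    h = o * np.tanh(c)                       # exposed hidden state
    return h, c

H, X = 8, 4                                  # hidden and input sizes (arbitrary)
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * H, X))
U = rng.normal(size=(4 * H, H))
h, c = lstm_step(rng.normal(size=X), np.zeros(H), np.zeros(H), W, U, np.zeros(4 * H))
```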
no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida
Axial CNNs are predicated on the assumption that the dataset supports approximately separable convolution operations with little or no loss of training accuracy.
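Separable convolution in this sense means factoring a k x k kernel into two one-dimensional passes. A minimal PyTorch sketch of the idea (class name and sizes are mine, not the paper's):

```python
import torch
import torch.nn as nn

class AxialConv2d(nn.Module):
    """Approximates a k x k convolution with a 1 x k pass followed by a
    k x 1 pass, cutting parameters from k*k to 2*k per channel pair."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.row = nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, k // 2))
        self.col = nn.Conv2d(out_ch, out_ch, (k, 1), padding=(k // 2, 0))

    def forward(self, x):
        return self.col(self.row(x))

x = torch.randn(1, 3, 32, 32)      # a CIFAR-sized image batch
y = AxialConv2d(3, 16, k=3)(x)
print(y.shape)                     # torch.Size([1, 16, 32, 32])
```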
no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida
Recently, many deep networks have introduced hypercomplex and related calculations into their architectures.
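The flavor of those hypercomplex calculations is the quaternion Hamilton product, where four free parameters populate a 4x4 real matrix; a small NumPy illustration (not taken from any of the cited architectures):

```python
import numpy as np

def hamilton_matrix(r, i, j, k):
    """Real 4x4 matrix of left-multiplication by the quaternion
    r + i*I + j*J + k*K.  Four free parameters fill sixteen slots,
    which is the weight sharing that hypercomplex layers exploit."""
    return np.array([
        [r, -i, -j, -k],
        [i,  r, -k,  j],
        [j,  k,  r, -i],
        [k, -j,  i,  r],
    ])

q = hamilton_matrix(0.5, -0.1, 0.3, 0.2)
x = np.array([1.0, 2.0, 3.0, 4.0])   # one 4-channel "pixel" as a quaternion
print(q @ x)                         # quaternion product as matrix-vector product
```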
no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida
We conduct experiments on CIFAR benchmarks, SVHN, and Tiny ImageNet datasets and achieve better performance with fewer trainable parameters and FLOPs.
no code implementations • 8 Apr 2022 • Nelly Elsayed, Zag ElSayed, Anthony S. Maida
Hearing impairment is a disability of partial or total hearing loss that causes significant problems in communicating with other people in society.
no code implementations • 27 Jan 2022 • Nelly Elsayed, Zag ElSayed, Anthony S. Maida
Long short-term memory (LSTM) is a robust recurrent neural network architecture for learning spatiotemporal sequential data.
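A convolutional LSTM handles such spatiotemporal data by replacing the matrix multiplies in the gate equations with 2-D convolutions, so the state keeps its spatial layout; a minimal PyTorch sketch (hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: gates are computed by a convolution
    over the concatenated input and hidden state (illustrative sketch)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        z = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(z, 4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c

cell = ConvLSTMCell(in_ch=1, hid_ch=8)
x = torch.randn(2, 1, 16, 16)            # a batch of single-channel frames
h = c = torch.zeros(2, 8, 16, 16)
h, c = cell(x, h, c)
```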
1 code implementation • 4 Oct 2021 • Nazmul Shahadat, Anthony S. Maida
In recent years, hypercomplex-inspired neural networks (HCNNs) have been used to improve deep learning architectures due to their ability to enable channel-based weight sharing, treat colors as a single entity, and improve representational coherence within the layers.
no code implementations • 28 Sep 2020 • Chase John Gaudet, Anthony S. Maida
It has been shown that the core reason complex- and hypercomplex-valued neural networks offer improvements over their real-valued counterparts is that aspects of their algebra force multi-dimensional data to be treated as a single entity (forced local relationship encoding), with the added benefit of reducing the parameter count via weight sharing.
no code implementations • 9 Sep 2020 • Chase J Gaudet, Anthony S. Maida
We show that the core reasons complex- and hypercomplex-valued neural networks offer improvements over their real-valued counterparts are the weight-sharing mechanism and the treatment of multidimensional data as a single entity.
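The parameter saving from that weight sharing is easy to quantify; a back-of-the-envelope check with arbitrarily chosen layer sizes:

```python
# A real-valued dense layer mapping 128 -> 128 features:
real_params = 128 * 128                      # 16384 weights

# A quaternion layer sees the same features as 32 quaternions.
# Each quaternion-quaternion connection stores 4 real numbers
# that are reused across a 4x4 Hamilton block:
quat_params = (128 // 4) * (128 // 4) * 4    # 4096 weights

print(real_params / quat_params)             # 4.0x fewer parameters
```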
2 code implementations • 28 Aug 2019 • Matin Hosseini, Anthony S. Maida, Majid Hosseini, Gottumukkala Raju
The proposed Inception LSTM methods are compared with the convolutional LSTM when applied within the PredNet predictive coding framework on both the KITTI and KTH datasets.
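The inception idea, as I read it, is to let each ConvLSTM gate see several receptive fields at once; a hedged PyTorch sketch of such a gate block (a paraphrase of the concept, not the released implementation):

```python
import torch
import torch.nn as nn

class InceptionGate(nn.Module):
    """Inception-style gate block: the single fixed-size convolution of
    a ConvLSTM gate is replaced by parallel convolutions with different
    receptive fields whose outputs are summed (sketch of the idea)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )

    def forward(self, x):
        return sum(b(x) for b in self.branches)

g = InceptionGate(8, 8)
print(g(torch.randn(1, 8, 16, 16)).shape)   # torch.Size([1, 8, 16, 16])
```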
1 code implementation • 18 Dec 2018 • Nelly Elsayed, Anthony S. Maida, Magdy Bayoumi
Hybrid LSTM-fully convolutional networks (LSTM-FCN) for time series classification have produced state-of-the-art classification results on univariate time series.
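The LSTM-FCN hybrid runs a recurrent branch and a fully convolutional branch in parallel and concatenates their features before classification; a compact PyTorch sketch with illustrative hyperparameters (not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class LSTMFCN(nn.Module):
    """Hybrid LSTM-FCN for univariate time series classification:
    LSTM and FCN branches run in parallel; their features are
    concatenated and fed to a linear classifier (sketch)."""
    def __init__(self, n_classes, hid=128):
        super().__init__()
        self.lstm = nn.LSTM(1, hid, batch_first=True)
        self.fcn = nn.Sequential(
            nn.Conv1d(1, 128, 8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hid + 128, n_classes)

    def forward(self, x):                      # x: (batch, time, 1)
        _, (h, _) = self.lstm(x)               # final hidden state
        f = self.fcn(x.transpose(1, 2)).squeeze(-1)
        return self.head(torch.cat([h[-1], f], dim=1))

logits = LSTMFCN(n_classes=5)(torch.randn(4, 100, 1))  # 4 series of length 100
```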
1 code implementation • 16 Oct 2018 • Nelly Elsayed, Anthony S. Maida, Magdy Bayoumi
Our reduced-gate model achieves equal or better next-frame(s) prediction accuracy than the original convolutional LSTM while using a smaller parameter budget, thereby reducing training time.
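One plausible reading of the reduced-gate design is a single learned gate standing in for the input, forget, and output gates; the PyTorch sketch below encodes that assumption and should not be taken as the authors' exact cell:

```python
import torch
import torch.nn as nn

class ReducedGateConvLSTMCell(nn.Module):
    """Sketch of a reduced-gate convolutional LSTM cell: one learned
    gate plays the input, forget, and output roles, shrinking the
    parameter budget relative to a full ConvLSTM (my reading of the
    idea, not the authors' code)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gate = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        xh = torch.cat([x, h], dim=1)
        f = self.gate(xh).sigmoid()            # single shared gate
        c = f * c + f * self.cand(xh).tanh()   # reused for forget and input
        h = f * c.tanh()                       # ...and for output
        return h, c
```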
no code implementations • 27 Sep 2018 • Nelly Elsayed, Anthony S. Maida, Magdy Bayoumi
Spatiotemporal sequence prediction is an important problem in deep learning.
2 code implementations • 22 Apr 2018 • Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothee Masquelier, Anthony S. Maida
In this approach, a deep (multilayer) artificial neural network (ANN) is trained in a supervised manner using backpropagation.
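For concreteness, a minimal version of that supervised backpropagation loop in PyTorch (dummy data, generic architecture):

```python
import torch
import torch.nn as nn

# Minimal supervised backpropagation loop for a multilayer ANN
# (a generic illustration of the training scheme described above).
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)            # a dummy batch of inputs
y = torch.randint(0, 10, (32,))     # dummy labels

for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()                 # backpropagate the error
    opt.step()                      # gradient descent update
```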
no code implementations • 12 Nov 2017 • Amirhossein Tavanaei, Anthony S. Maida
This approach enjoys benefits of both accurate gradient descent and temporally local, efficient STDP.
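The STDP side of that hybrid is a temporally local update driven by the pre/post spike-time difference; a NumPy sketch of the classic pair-based rule (time constants are illustrative, not the paper's values):

```python
import numpy as np

def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    delta_t = t_post - t_pre in ms (constants are illustrative)."""
    if delta_t > 0:                            # pre before post: potentiate
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)    # otherwise: depress

for dt in (5.0, -5.0):
    print(dt, stdp(dt))
```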
no code implementations • 9 Nov 2016 • Amirhossein Tavanaei, Anthony S. Maida
Kernels for the convolutional layer are trained using local learning.
no code implementations • 3 Jun 2016 • Amirhossein Tavanaei, Timothee Masquelier, Anthony S. Maida
The original model showed that a spike-timing-dependent plasticity (STDP) learning algorithm embedded in an appropriately selected SCN could perform unsupervised feature discovery.
no code implementations • 2 Jun 2016 • Amirhossein Tavanaei, Anthony S. Maida
Spiking neural networks (SNNs) with adaptive synapses reflect core properties of biological neural networks.
no code implementations • 2 Jun 2016 • Amirhossein Tavanaei, Anthony S. Maida
The emission (observation) probabilities of the HMM are represented in the SNN and trained with the STDP rule.
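To make the setup concrete: in a conventional HMM the emission matrix feeds the forward recursion directly, whereas here it is represented by STDP-trained synaptic weights. A toy NumPy forward pass with a hand-set emission matrix (purely illustrative):

```python
import numpy as np

# Toy HMM: 2 hidden states, 3 observation symbols.  In the paper the
# emission matrix B lives in SNN synaptic weights learned with STDP;
# here it is simply given.
A = np.array([[0.7, 0.3], [0.4, 0.6]])            # state transitions
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])  # emission probabilities
pi = np.array([0.6, 0.4])                         # initial distribution

def forward(obs):
    """Likelihood of an observation sequence under the HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward([0, 1, 2]))
```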