Search Results for author: Anthony S. Maida

Found 19 papers, 5 papers with code

LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks

no code implementations • 12 Jan 2023 • Nelly Elsayed, Zag ElSayed, Anthony S. Maida

Long short-term memory (LSTM) is a robust recurrent neural network architecture for learning sequential data.

Speech Emotion Recognition

Deep Residual Axial Networks

no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida

Axial CNNs are predicated on the assumption that the dataset supports approximately separable convolution operations with little or no loss of training accuracy.

Image Classification, Image Super-Resolution
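The separability assumption behind axial CNNs can be made concrete: when a 2D kernel factors into an outer product of two 1D kernels, a full 2D convolution is exactly two cheaper 1D (axial) passes. This is a generic illustration of separable convolution, not code from the paper; all kernels and sizes are made up.

```python
# If K = col ⊗ row (outer product), then convolving with K equals
# convolving each row with `row`, then each column with `col`.
# This drops the per-output cost from kh*kw multiplies to kh+kw.

def conv2d_valid(img, K):
    kh, kw = len(K), len(K[0])
    return [[sum(K[i][j] * img[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(len(img[0]) - kw + 1)]
            for r in range(len(img) - kh + 1)]

def conv1d_rows(img, k):
    n = len(k)
    return [[sum(k[j] * row[c + j] for j in range(n))
             for c in range(len(row) - n + 1)]
            for row in img]

def conv1d_cols(img, k):
    n = len(k)
    return [[sum(k[i] * img[r + i][c] for i in range(n))
             for c in range(len(img[0]))]
            for r in range(len(img) - n + 1)]

col, row = [1, 2], [1, 0, -1]                     # 1D factors (illustrative)
K = [[ci * rj for rj in row] for ci in col]       # full 2D kernel = col ⊗ row

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 8, 7, 6],
       [5, 4, 3, 2]]

full  = conv2d_valid(img, K)                      # one 2D pass
axial = conv1d_cols(conv1d_rows(img, row), col)   # two 1D (axial) passes
assert full == axial                              # identical outputs
```

For kernels that are only *approximately* separable, the two results differ slightly, which is the "little or no loss of training accuracy" trade-off the abstract describes.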

Enhancing ResNet Image Classification Performance by using Parameterized Hypercomplex Multiplication

no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida

Recently, many deep networks have introduced hypercomplex and related calculations into their architectures.

Image Classification
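The parameterized hypercomplex multiplication (PHM) named in the title can be sketched as follows, following the general PHM idea of building a weight matrix from Kronecker products rather than learning it densely. The dimensions and values here are illustrative, not the paper's.

```python
# PHM-style structured weight: instead of learning a dense k x d matrix,
# learn n small matrices A_i (n x n) and S_i (k/n x d/n) and form
# W = sum_i A_i ⊗ S_i. For large layers this cuts parameters roughly to 1/n.

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

def phm_weight(As, Ss):
    """W = sum_i kron(A_i, S_i)."""
    terms = [kron(A, S) for A, S in zip(As, Ss)]
    return [[sum(t[r][c] for t in terms) for c in range(len(terms[0][0]))]
            for r in range(len(terms[0]))]

n = 2                                          # hypercomplex dimension
As = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]      # n learned n x n matrices
Ss = [[[1, 2], [3, 4]], [[0, 1], [1, 0]]]      # n learned (k/n) x (d/n) matrices
W = phm_weight(As, Ss)                         # structured 4 x 4 weight matrix
```

At this toy size there is no saving, but for a k x d layer the learned parameter count is n³ + k·d/n versus k·d, so the reduction approaches a factor of n.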

Deep Axial Hypercomplex Networks

no code implementations • 11 Jan 2023 • Nazmul Shahadat, Anthony S. Maida

We conduct experiments on CIFAR benchmarks, SVHN, and Tiny ImageNet datasets and achieve better performance with fewer trainable parameters and FLOPS.

Image Classification

Vision-Based American Sign Language Classification Approach via Deep Learning

no code implementations • 8 Apr 2022 • Nelly Elsayed, Zag ElSayed, Anthony S. Maida

Hearing impairment, the partial or total loss of hearing, creates significant barriers to communication with other people in society.

LiteLSTM Architecture for Deep Recurrent Neural Networks

no code implementations • 27 Jan 2022 • Nelly Elsayed, Zag ElSayed, Anthony S. Maida

Long short-term memory (LSTM) is a robust recurrent neural network architecture for learning spatiotemporal sequential data.

Improving Axial-Attention Network Classification via Cross-Channel Weight Sharing

1 code implementation • 4 Oct 2021 • Nazmul Shahadat, Anthony S. Maida

In recent years, hypercomplex-inspired neural networks (HCNNs) have been used to improve deep learning architectures due to their ability to enable channel-based weight sharing, treat colors as a single entity, and improve representational coherence within the layers.

Classification, Image Classification

Removing Dimensional Restrictions on Complex/Hyper-complex Convolutions

no code implementations • 28 Sep 2020 • Chase John Gaudet, Anthony S. Maida

It has been shown that the core reasons complex- and hypercomplex-valued neural networks offer improvements over their real-valued counterparts are that their algebra forces multi-dimensional data to be treated as a single entity (forced local relationship encoding), with the added benefit of a reduced parameter count via weight sharing.
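The "single entity" and weight-sharing claims can be made concrete with the quaternion case: one quaternion weight (r, x, y, z) acts on a 4-channel input through a structured 4×4 Hamilton-product matrix, so 4 learned numbers fill all 16 slots and every output channel mixes every input channel. The values below are illustrative.

```python
# Quaternion weight sharing: 4 parameters populate a 4x4 matrix, a 4x
# reduction over an unconstrained real-valued 4->4 linear map (16 params),
# while the structure forces the four channels to interact.

def hamilton_matrix(r, x, y, z):
    """Matrix of left-multiplication by the quaternion r + xi + yj + zk."""
    return [[r, -x, -y, -z],
            [x,  r, -z,  y],
            [y,  z,  r, -x],
            [z, -y,  x,  r]]

def apply(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

W = hamilton_matrix(1.0, 0.5, -0.5, 2.0)
out = apply(W, [1.0, 0.0, 0.0, 0.0])   # multiply by the identity quaternion
assert out == [1.0, 0.5, -0.5, 2.0]    # recovers the weight's own components
```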

Generalizing Complex/Hyper-complex Convolutions to Vector Map Convolutions

no code implementations • 9 Sep 2020 • Chase J Gaudet, Anthony S. Maida

We show that the core reasons complex- and hypercomplex-valued neural networks offer improvements over their real-valued counterparts are the weight-sharing mechanism and the treatment of multidimensional data as a single entity.

Inception-inspired LSTM for Next-frame Video Prediction

2 code implementations • 28 Aug 2019 • Matin Hosseini, Anthony S. Maida, Majid Hosseini, Gottumukkala Raju

The proposed Inception LSTM methods are compared with convolutional LSTM when applied using PredNet predictive coding framework for both the KITTI and KTH data sets.

Autonomous Vehicles, Image Classification, +1

Deep Gated Recurrent and Convolutional Network Hybrid Model for Univariate Time Series Classification

1 code implementation • 18 Dec 2018 • Nelly Elsayed, Anthony S. Maida, Magdy Bayoumi

Hybrid LSTM-fully convolutional networks (LSTM-FCN) for time series classification have produced state-of-the-art classification results on univariate time series.

Classification, General Classification, +3

Reduced-Gate Convolutional LSTM Using Predictive Coding for Spatiotemporal Prediction

1 code implementation • 16 Oct 2018 • Nelly Elsayed, Anthony S. Maida, Magdy Bayoumi

Our reduced-gate model achieves equal or better next-frame(s) prediction accuracy than the original convolutional LSTM while using a smaller parameter budget, thereby reducing training time.

Video Prediction
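The "smaller parameter budget" claim can be illustrated with a back-of-the-envelope count. A standard convolutional LSTM learns four convolutions (input, forget, and output gates plus the candidate state); if the three gates are merged into a single shared gate, as one reading of "reduced-gate" suggests (the exact design is in the paper), only two convolutions remain. All layer sizes below are illustrative assumptions.

```python
# Hedged parameter-count sketch: convLSTM with 4 conv blocks vs. a
# reduced variant with a single shared gate plus the candidate conv.

def conv_params(k, c_in, c_hidden, n_convs, bias=True):
    """Weights for n_convs convolutions over the concatenated [input; hidden]."""
    per_conv = k * k * (c_in + c_hidden) * c_hidden + (c_hidden if bias else 0)
    return n_convs * per_conv

k, c_in, c_h = 3, 3, 64                            # illustrative sizes
standard = conv_params(k, c_in, c_h, n_convs=4)    # gates i, f, o + candidate
reduced  = conv_params(k, c_in, c_h, n_convs=2)    # shared gate + candidate
assert reduced * 2 == standard                     # half the recurrent weights
```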

Deep Learning in Spiking Neural Networks

2 code implementations • 22 Apr 2018 • Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothee Masquelier, Anthony S. Maida

In this approach, a deep (multilayer) artificial neural network (ANN) is trained in a supervised manner using backpropagation.

BP-STDP: Approximating Backpropagation using Spike Timing Dependent Plasticity

no code implementations • 12 Nov 2017 • Amirhossein Tavanaei, Anthony S. Maida

This approach enjoys benefits of both accurate gradient descent and temporally local, efficient STDP.
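A minimal STDP-style update of the kind the abstract refers to can be sketched as follows: when the presynaptic neuron fires shortly before the postsynaptic one, the weight is potentiated; when it fires after, the weight is depressed. The learning rate and window are illustrative constants, not the paper's rule.

```python
# Temporally local STDP update for a single pre/post spike pair.
# Potentiation (LTP) when pre leads post; depression (LTD) when post leads
# pre; no change outside the plasticity window. Constants are illustrative.

def stdp_update(w, t_pre, t_post, lr=0.1, window=20.0):
    """Return the updated weight for one spike pair (times in ms)."""
    dt = t_post - t_pre
    if 0 < dt <= window:        # pre before post: potentiate
        return w + lr
    if -window <= dt < 0:       # post before pre: depress
        return w - lr
    return w                    # pair outside the window: no update

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # pre leads post: w goes up
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # post leads pre: w goes down
```

The update needs only the two local spike times, which is the "temporally local, efficient STDP" side of the trade-off the abstract highlights.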

Acquisition of Visual Features Through Probabilistic Spike-Timing-Dependent Plasticity

no code implementations • 3 Jun 2016 • Amirhossein Tavanaei, Timothee Masquelier, Anthony S. Maida

The original model showed that a spike-timing-dependent plasticity (STDP) learning algorithm embedded in an appropriately selected SCN could perform unsupervised feature discovery.

A Spiking Network that Learns to Extract Spike Signatures from Speech Signals

no code implementations • 2 Jun 2016 • Amirhossein Tavanaei, Anthony S. Maida

Spiking neural networks (SNNs) with adaptive synapses reflect core properties of biological neural networks.

Speech Recognition

Training a Hidden Markov Model with a Bayesian Spiking Neural Network

no code implementations • 2 Jun 2016 • Amirhossein Tavanaei, Anthony S. Maida

The emission (observation) probabilities of the HMM are represented in the SNN and trained with the STDP rule.
