# Sequential Image Classification

31 papers with code • 3 benchmarks • 2 datasets

Sequential image classification is the task of classifying an image that is presented to the model as a sequence, e.g., one pixel or one row at a time (as in the Sequential MNIST benchmark), so the model must integrate information over long time horizons.

(Image credit: TensorFlow-101)

## Libraries

Use these libraries to find Sequential Image Classification models and implementations.

## Most implemented papers

### An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
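The paper's temporal convolutional network (TCN) gets its long effective memory from stacked dilated causal convolutions. Below is a minimal numpy sketch of that core mechanism (function and variable names are mine; the full TCN adds residual blocks and weight normalization):

```python
import numpy as np

def causal_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution: the output at step t only sees
    x[t], x[t - dilation], x[t - 2*dilation], ... -- never the future."""
    T, = x.shape
    k = len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])   # left-pad so early steps are defined
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(T)])

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
# Doubling the dilation per layer makes the receptive field grow exponentially:
# with kernel size 3 and dilations 1, 2, 4, 8 it already spans 31 time steps.
for d in (1, 2, 4, 8):
    x = np.maximum(causal_conv1d(x, rng.standard_normal(3), dilation=d), 0.0)
print(x.shape)  # (32,) -- sequence length is preserved
```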

### Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN

Experimental results show that the proposed IndRNN can process very long sequences (over 5,000 time steps), can be used to construct very deep networks (21 layers in the experiments), and can still be trained robustly.
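What makes this possible is that IndRNN replaces the full hidden-to-hidden matrix with one scalar recurrent weight per neuron, so the recurrence is elementwise. A minimal sketch of the cell under that reading (names are mine; the real model stacks such layers, typically with batch normalization in between):

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """IndRNN update: u * h_prev is elementwise, i.e., every hidden unit
    only recurses on itself. ReLU is the activation used in the paper."""
    return np.maximum(W @ x_t + u * h_prev + b, 0.0)

rng = np.random.default_rng(0)
d_in, d_h, T = 8, 16, 5000
W = rng.standard_normal((d_h, d_in)) * 0.1
u = rng.uniform(0.0, 1.0, d_h)   # constraining |u| <= 1 keeps the recurrence stable
b = np.zeros(d_h)

h = np.zeros(d_h)
for t in range(T):                # 5,000 steps without the state exploding
    h = indrnn_step(rng.standard_normal(d_in), h, W, u, b)
print(h.shape)  # (16,)
```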

### A Simple Way to Initialize Recurrent Networks of Rectified Linear Units

Learning long-term dependencies in recurrent networks is difficult due to vanishing and exploding gradients.
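The fix the paper proposes (the "IRNN") is strikingly simple: use ReLU units and initialize the recurrent weight matrix to the identity with zero bias, so at initialization the network just accumulates its inputs and gradients pass through undistorted. A sketch of that initialization (names are mine):

```python
import numpy as np

def irnn_init(d_in, d_h, rng):
    """IRNN initialization: identity recurrence, zero bias, small input weights.
    With ReLU, the Jacobian of each step starts as a (masked) identity, so
    gradients neither vanish nor explode at the beginning of training."""
    W_xh = rng.standard_normal((d_h, d_in)) * 0.001
    W_hh = np.eye(d_h)
    b = np.zeros(d_h)
    return W_xh, W_hh, b

rng = np.random.default_rng(0)
W_xh, W_hh, b = irnn_init(8, 32, rng)
h = np.zeros(32)
for t in range(1000):
    h = np.maximum(W_xh @ rng.standard_normal(8) + W_hh @ h + b, 0.0)
print(np.linalg.norm(h))   # stays moderate even after 1,000 steps
```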

### Recurrent Batch Normalization

We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks.
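Concretely, the input-to-hidden and hidden-to-hidden pre-activations are batch-normalized separately inside every LSTM step. A simplified sketch of one such step (the paper additionally normalizes the cell state before the output tanh, keeps separate statistics per time step, and recommends initializing the BN gains near 0.1; only the last of these is reflected below):

```python
import numpy as np

def bn(z, gamma, eps=1e-5):
    """Batch normalization over the batch axis, with a learnable gain gamma."""
    return gamma * (z - z.mean(0)) / np.sqrt(z.var(0) + eps)

def bn_lstm_step(x_t, h, c, Wx, Wh, bias, gamma_x, gamma_h):
    """BN-LSTM step: normalize x_t @ Wx and h @ Wh separately, then gate as usual."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = bn(x_t @ Wx, gamma_x) + bn(h @ Wh, gamma_h) + bias
    i, f, o, g = np.split(z, 4, axis=1)
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
B, d_in, d_h = 16, 8, 32
Wx = rng.standard_normal((d_in, 4 * d_h)) * 0.1
Wh = rng.standard_normal((d_h, 4 * d_h)) * 0.1
h = np.zeros((B, d_h))
c = np.zeros((B, d_h))
for t in range(10):
    h, c = bn_lstm_step(rng.standard_normal((B, d_in)), h, c,
                        Wx, Wh, np.zeros(4 * d_h), gamma_x=0.1, gamma_h=0.1)
print(h.shape)  # (16, 32)
```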

### Gating Revisited: Deep Multi-layer RNNs That Can Be Trained

We propose a new STAckable Recurrent cell (STAR) for recurrent neural networks (RNNs), which has fewer parameters than the widely used LSTM and GRU cells while being more robust against vanishing or exploding gradients.
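As I read the paper, the STAR cell keeps a single gate that blends the previous state with an input-driven candidate, which is where the parameter savings come from (three weight matrices versus eight for an LSTM). A sketch under that reading; the exact equations are my transcription and may differ in detail from the paper:

```python
import numpy as np

def star_step(x_t, h, Wz, Wkx, Wkh, bz, bk):
    """Single-gate recurrent update in the spirit of STAR: one candidate z
    computed from the input alone, one gate k that mixes it into the state."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = np.tanh(Wz @ x_t + bz)
    k = sig(Wkx @ x_t + Wkh @ h + bk)
    return np.tanh((1.0 - k) * h + k * z)

rng = np.random.default_rng(0)
d_in, d_h = 8, 16
Wz = rng.standard_normal((d_h, d_in)) * 0.3
Wkx = rng.standard_normal((d_h, d_in)) * 0.3
Wkh = rng.standard_normal((d_h, d_h)) * 0.3
h = np.zeros(d_h)
for t in range(100):
    h = star_step(rng.standard_normal(d_in), h, Wz, Wkx, Wkh,
                  np.zeros(d_h), np.zeros(d_h))
print(h.shape)  # (16,)
```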

### HiPPO: Recurrent Memory with Optimal Polynomial Projections

A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed.
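HiPPO's answer is to keep, at every step, the coefficients of an optimal polynomial projection of the history seen so far, updated by a fixed linear recurrence. The sketch below implements the HiPPO-LegS variant as I understand it from the paper; the matrix entries and the Euler-discretized update are my transcription:

```python
import numpy as np

def hippo_legs(N):
    """HiPPO-LegS transition matrices (transcribed from my reading of the paper)."""
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i > j:
                A[i, j] = np.sqrt(2 * i + 1) * np.sqrt(2 * j + 1)
            elif i == j:
                A[i, j] = i + 1
    B = np.sqrt(2 * np.arange(N) + 1.0)
    return A, B

N, T = 16, 200
A, B = hippo_legs(N)
f = np.sin(np.linspace(0, 4 * np.pi, T))   # a toy input signal
c = np.zeros(N)
for k in range(1, T + 1):
    # Incremental update: c_k compresses f_1..f_k into N Legendre coefficients.
    c = (np.eye(N) - A / k) @ c + (B / k) * f[k - 1]
print(c[:4])   # low-order coefficients of the compressed history
```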

### Efficiently Modeling Long Sequences with Structured State Spaces

A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies.
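S4's starting point is the linear state-space model x'(t) = Ax(t) + Bu(t), y(t) = Cx(t), discretized and unrolled as a recurrence. The sketch below shows only this generic backbone; S4's actual contribution, a structured (HiPPO-initialized) parameterization of A with an efficient convolutional evaluation, is not reproduced here:

```python
import numpy as np

def discretize(A, B, dt):
    """Bilinear (Tustin) discretization of the continuous SSM x' = Ax + Bu."""
    I = np.eye(A.shape[0])
    inv = np.linalg.inv(I - (dt / 2) * A)
    return inv @ (I + (dt / 2) * A), inv @ (dt * B)

def ssm_scan(Ab, Bb, C, u):
    """Run the discrete SSM as a linear recurrence: x_k = Ab x_{k-1} + Bb u_k."""
    x, ys = np.zeros(Ab.shape[0]), []
    for u_k in u:
        x = Ab @ x + Bb * u_k
        ys.append(C @ x)
    return np.array(ys)

rng = np.random.default_rng(0)
N = 8
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # a stable toy state matrix
B, C = rng.standard_normal(N), rng.standard_normal(N)
Ab, Bb = discretize(A, B, dt=0.1)
y = ssm_scan(Ab, Bb, C, rng.standard_normal(100))
print(y.shape)  # (100,)
```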

### Unitary Evolution Recurrent Neural Networks

When the eigenvalues of the hidden-to-hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well-studied issue of vanishing and exploding gradients, especially when trying to learn long-term dependencies.
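If the recurrence matrix is unitary, all its eigenvalues have absolute value exactly 1, so repeated multiplication can neither shrink nor amplify the hidden state. The sketch below demonstrates that property; the paper itself parameterizes W as a product of simple unitary factors (diagonal, Fourier, reflection, permutation matrices), whereas here I simply exponentiate a skew-Hermitian matrix to obtain a generic unitary W:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
S = (M - M.conj().T) / 2          # skew-Hermitian: S^H = -S
W = expm(S)                       # the matrix exponential of S is unitary
print(np.abs(np.linalg.eigvals(W)).round(12))   # every eigenvalue has |lambda| = 1

h = rng.standard_normal(16).astype(complex)
norm0 = np.linalg.norm(h)
for _ in range(1000):             # the linear part of 1,000 uRNN steps
    h = W @ h
print(norm0, np.linalg.norm(h))   # equal up to round-off: no vanishing, no exploding
```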

### Full-Capacity Unitary Recurrent Neural Networks

To address the question of how much a restricted-capacity parameterization limits performance, we propose full-capacity uRNNs that optimize their recurrence matrix over all unitary matrices, leading to significantly improved performance over uRNNs that use a restricted-capacity recurrence matrix.
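The mechanism that makes this work is a gradient step that stays exactly on the unitary group, rather than inside a restricted parameterization. Below is a sketch of the multiplicative Cayley-transform update as I understand it from the paper (the gradient G is a random stand-in here):

```python
import numpy as np

def cayley_update(W, G, lr):
    """One Riemannian gradient step on the unitary group: A = G W^H - W G^H is
    skew-Hermitian, and its Cayley transform maps W to another exactly unitary
    matrix, so no re-projection is ever needed."""
    A = G @ W.conj().T - W @ G.conj().T
    I = np.eye(W.shape[0])
    return np.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ W)

rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8)))[0]
G = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))  # stand-in gradient
W = cayley_update(W, G, lr=0.01)
print(np.max(np.abs(W.conj().T @ W - np.eye(8))))   # ~1e-15: W is still unitary
```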

### Dilated Recurrent Neural Networks

To provide a theory-based quantification of the advantages of dilated recurrent skip connections, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures.
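In a dilated recurrent layer, the skip connection feeds the state from d steps back instead of the previous step, and stacking layers with exponentially increasing dilations is what shortens the longest gradient path. A minimal sketch under that reading (names are mine):

```python
import numpy as np

def dilated_rnn_layer(xs, d, W_x, W_h, b):
    """Dilated recurrence: h_t depends on h_{t-d}, so information can cross
    the sequence in about T/d hops instead of T."""
    T = xs.shape[0]
    d_h = W_h.shape[0]
    hs = np.zeros((T, d_h))
    for t in range(T):
        h_prev = hs[t - d] if t >= d else np.zeros(d_h)
        hs[t] = np.tanh(W_x @ xs[t] + W_h @ h_prev + b)
    return hs

rng = np.random.default_rng(0)
xs = rng.standard_normal((64, 8))
for d in (1, 2, 4, 8):            # exponentially increasing dilations per layer
    W_x = rng.standard_normal((8, xs.shape[1])) * 0.3
    W_h = rng.standard_normal((8, 8)) * 0.3
    xs = dilated_rnn_layer(xs, d, W_x, W_h, np.zeros(8))
print(xs.shape)  # (64, 8)
```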