Handwritten Digit Recognition
28 papers with code • 2 benchmarks • 6 datasets
Most implemented papers
LipschitzLR: Using theoretically computed adaptive learning rates for fast convergence
In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.
How Important is Weight Symmetry in Backpropagation?
Gradient backpropagation (BP) requires symmetric feedforward and feedback connections -- the same weights must be used for forward and backward passes.
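The symmetry requirement can be illustrated with a toy sketch (all shapes and names here are illustrative assumptions, not from the paper): in standard BP the backward pass reuses the transpose of the forward weights, whereas a feedback-alignment-style variant swaps in a fixed random matrix and breaks that symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))          # toy input
W1 = rng.normal(size=(5, 4))       # forward weights, layer 1
W2 = rng.normal(size=(3, 5))       # forward weights, layer 2

h = np.tanh(W1 @ x)                # hidden activations
y = W2 @ h                         # linear readout
target = np.zeros(3)
delta_out = y - target             # output error (squared-loss gradient)

# Standard BP: the error is propagated back through W2.T -- the *same*
# weights used on the forward pass, transposed. This is weight symmetry.
delta_hidden_bp = (W2.T @ delta_out) * (1 - h**2)

# Feedback-alignment-style alternative (assumed for illustration): replace
# W2.T with a fixed random matrix B, so forward and feedback paths differ.
B = rng.normal(size=(5, 3))
delta_hidden_fa = (B @ delta_out) * (1 - h**2)
```

The two hidden-layer error signals have the same shape but generally differ, which is exactly the degree of freedom the paper's question is about.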
Compressing deep neural networks on FPGAs to binary and ternary precision with HLS4ML
We discuss the trade-off between model accuracy and resource consumption.
MNIST-MIX: A Multi-language Handwritten Digit Recognition Dataset
In this letter, we contribute a multi-language handwritten digit recognition dataset named MNIST-MIX, which is the largest dataset of the same type in terms of both languages and data samples.
Integrated Gradient Correlation: a Dataset-wise Attribution Method
Attribution methods are primarily designed to study the distribution of input component contributions to individual model predictions.
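As background for the kind of per-input attribution this paper starts from, here is a hedged sketch of plain integrated gradients on a hand-written quadratic model; the function, baseline, and step count are illustrative assumptions, not the paper's method.

```python
import numpy as np

def f(x):
    # Toy model (assumption for illustration): sum of squares.
    return (x ** 2).sum()

def grad_f(x):
    return 2 * x

def integrated_gradients(x, baseline, steps=100):
    # Riemann-sum (midpoint) approximation of
    # (x - baseline) * integral_0^1 grad f(baseline + a*(x - baseline)) da,
    # i.e., each input component's contribution to f(x) - f(baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    path_grads = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * path_grads

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline)
# Completeness property: the attributions sum to f(x) - f(baseline).
```

The completeness property (attributions summing to the change in model output) is what makes such per-component contributions interpretable in the first place.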
Gradient-based learning applied to document recognition
It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques.
Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition
Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the famous MNIST handwritten digits benchmark.
A neuromorphic hardware architecture using the Neural Engineering Framework for pattern recognition
The architecture is not limited to handwriting recognition, but is generally applicable as an extremely fast pattern recognition processor for various kinds of patterns such as speech and images.
Large-scale Artificial Neural Network: MapReduce-based Deep Learning
Faced with the continuously increasing scale of data, the original back-propagation neural network learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload.
Group Sparse Regularization for Deep Neural Networks
In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection).
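The mechanism that couples these three objectives is a group-lasso-style penalty; a minimal sketch (shapes and grouping are assumptions for illustration) treats each input feature's outgoing weight column as one group, so driving a whole column to zero prunes that feature.

```python
import numpy as np

def group_sparse_penalty(W, lam=0.1):
    # W: (n_outputs, n_inputs); one group per input column.
    # Sum of (unsquared) L2 norms of the groups -- the group-lasso penalty,
    # which pushes entire columns to exactly zero rather than single entries.
    return lam * np.linalg.norm(W, axis=0).sum()

W = np.array([[0.5, 0.0, -1.2],
              [0.3, 0.0,  0.8]])
penalty = group_sparse_penalty(W)
# Column 1 is entirely zero, so it contributes nothing to the penalty and
# the corresponding input feature can be removed from the network.
```

Applying the same grouping to a hidden layer's rows instead of columns penalizes whole neurons, which is how the penalty also controls the number of neurons per layer.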