The quantization methods relevant to embedded execution on a microcontroller are first outlined.
Designing deep learning-based solutions is becoming a race to train ever-deeper models with more layers.
Learning deep representations to solve complex machine learning tasks has become the prominent trend in the past few years.
Typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, whereas the spatial resolution of the inputs is reduced through downsampling operations.
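This shape progression can be sketched with a small illustrative helper (an assumption of a common pattern, not a specific architecture from the text): each downsampling stage halves the spatial resolution while doubling the number of feature maps.

```python
def shape_progression(h, w, c, stages):
    """Return (height, width, channels) after each downsampling stage
    of a typical CNN: spatial dims halve, feature maps double."""
    shapes = [(h, w, c)]
    for _ in range(stages):
        h, w = h // 2, w // 2  # downsampling (e.g. stride-2 conv or pooling)
        c = c * 2              # more feature maps as depth increases
        shapes.append((h, w, c))
    return shapes

# A 224x224 input with 64 initial feature maps, through 4 stages:
print(shape_progression(224, 224, 64, 4))
# [(224, 224, 64), (112, 112, 128), (56, 56, 256), (28, 28, 512), (14, 14, 1024)]
```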
Neural networks have demonstrably achieved state-of-the-art accuracy using low-bitlength integer quantization, yielding both execution-time and energy benefits on existing hardware designs that support short bitlengths.
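As a minimal sketch of what low-bitlength integer quantization looks like (assuming a standard affine scale/zero-point scheme, not necessarily the exact scheme the text refers to), floats are mapped to a small integer range and recovered approximately on dequantization:

```python
def quantize(x, num_bits=8):
    """Affine (asymmetric) quantization of a list of floats to num_bits
    unsigned integers, returning (values, scale, zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(x), max(x)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a constant input
    zero_point = round(qmin - lo / scale)     # integer offset mapping 0.0
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in x]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized integers."""
    return [(v - zero_point) * scale for v in q]
```

The round-trip error is bounded by the scale (one quantization step), which is why short bitlengths can preserve accuracy when the value range is well chosen.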
Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging.
In this paper, we tackle the problem of incrementally learning a classifier, one example at a time, directly on chip.
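One memory-light way to learn a classifier one example at a time, of the kind that fits on-chip constraints, is a running class-mean (nearest-class-mean) update; this sketch is a hypothetical illustration, not the paper's actual algorithm:

```python
class IncrementalNCM:
    """Nearest-class-mean classifier updated one example at a time."""

    def __init__(self):
        self.means = {}   # class label -> running mean vector
        self.counts = {}  # class label -> number of examples seen

    def learn_one(self, x, y):
        """Fold a single example into the running mean of its class."""
        n = self.counts.get(y, 0)
        mu = self.means.get(y, [0.0] * len(x))
        # incremental mean update: mu <- mu + (x - mu) / (n + 1)
        self.means[y] = [m + (v - m) / (n + 1) for m, v in zip(mu, x)]
        self.counts[y] = n + 1

    def predict(self, x):
        """Return the label whose class mean is closest to x."""
        def dist2(mu):
            return sum((a - b) ** 2 for a, b in zip(x, mu))
        return min(self.means, key=lambda y: dist2(self.means[y]))
```

The update needs only one mean vector and one counter per class, so memory and compute per example stay constant regardless of how many examples have been seen.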
Specifically, we introduce a graph-based RKD method, in which graphs are used to capture the geometry of latent spaces.
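A hedged sketch of the underlying idea (an assumption, not the paper's exact method): represent each latent space as a graph whose edge weights are normalized pairwise distances between embeddings, and distill by penalizing the discrepancy between the teacher's and student's graphs.

```python
def pairwise_graph(embeddings):
    """Edge-weight matrix of pairwise Euclidean distances between
    embeddings, normalized by their mean so the graph is scale-free."""
    n = len(embeddings)
    d = [[sum((a - b) ** 2 for a, b in zip(embeddings[i], embeddings[j])) ** 0.5
          for j in range(n)] for i in range(n)]
    off_diag = [d[i][j] for i in range(n) for j in range(n) if i != j]
    mean = sum(off_diag) / len(off_diag) if off_diag else 1.0
    return [[v / mean for v in row] for row in d]

def graph_distillation_loss(teacher_emb, student_emb):
    """Mean squared difference between teacher and student edge weights."""
    gt, gs = pairwise_graph(teacher_emb), pairwise_graph(student_emb)
    n = len(gt)
    return sum((gt[i][j] - gs[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```

Because the edge weights are normalized by their mean, the loss compares the shape of the two latent geometries rather than their absolute scale.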
In many application domains such as computer vision, Convolutional Layers (CLs) are key to the accuracy of deep learning methods.
We introduce a novel loss function for training deep learning architectures to perform classification.
Convolutional Neural Networks (CNNs) are state-of-the-art in numerous computer vision tasks such as object classification and detection.
Deep learning-based methods have reached state-of-the-art performance, relying on large quantities of available data and on computational power.