This chapter is composed of four main parts: tools for visualizing intermediate layers in a DNN, denoising data representations, optimizing graph objective functions, and regularizing the learning process.
It is very common to face classification problems in which the number of available labeled samples is small compared to their dimensionality.
Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task.
In the context of few-shot learning, one cannot measure the generalization ability of a trained classifier using a validation set, due to the small number of labeled samples.
Specifically, we introduce a graph-based RKD method, in which graphs are used to capture the geometry of latent spaces.
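One common way to capture the geometry of a latent space with a graph is to connect each sample to its nearest neighbors under a similarity measure. The sketch below builds a cosine-similarity k-NN graph from a matrix of latent features; the function name, the choice of cosine similarity, and the value of k are illustrative assumptions, not necessarily the exact construction used in the method described here.

```python
import numpy as np

def knn_graph(features, k=3):
    """Build a symmetric k-NN adjacency matrix from latent features.

    features: (n, d) array of latent representations (hypothetical input).
    Returns an (n, n) binary adjacency matrix encoding local geometry.
    """
    # Cosine similarity between all pairs of latent vectors.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops

    n = features.shape[0]
    adj = np.zeros((n, n))
    # Connect each sample to its k most similar neighbors.
    idx = np.argsort(sim, axis=1)[:, -k:]
    rows = np.repeat(np.arange(n), k)
    adj[rows, idx.ravel()] = 1.0
    # Symmetrize: keep an edge if either endpoint selected the other.
    return np.maximum(adj, adj.T)
```

The symmetrization step is one of several conventions (mutual k-NN graphs, weighted edges, or Gaussian kernels are common alternatives), chosen here only to keep the sketch short.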
Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging.
Convolutional Neural Networks are very efficient at processing signals defined on a discrete Euclidean space (such as images).
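The efficiency of CNNs on Euclidean domains comes from applying the same small kernel at every grid location. As a minimal illustration, a valid 2D cross-correlation on a single-channel image can be written directly; the function and its inputs are hypothetical, standing in for one convolutional layer:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation on a regular (Euclidean) grid.

    The same kernel weights are reused at every position, which is
    what makes convolution cheap and translation-equivariant on images.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Inner product between the kernel and the local patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

On an irregular graph there is no such shared grid neighborhood, which is precisely why extending convolution beyond Euclidean domains requires the graph-based constructions discussed in this chapter.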
We introduce a novel loss function for training deep learning architectures to perform classification.