Data augmentation is a practical technique for improving the generalization ability of neural networks and preventing overfitting.
By increasing the quantity and diversity of training data, data augmentation has become an indispensable part of training deep learning models on image data.
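For concreteness, here is a minimal sketch of such a pipeline using standard torchvision transforms; the specific transforms and parameter values are illustrative choices, not taken from any particular paper:

```python
import torchvision.transforms as T

# Illustrative augmentation pipeline: every epoch each image is seen
# under a different random crop, flip, and color perturbation.
augment = T.Compose([
    T.RandomResizedCrop(224),        # vary scale and crop location
    T.RandomHorizontalFlip(p=0.5),   # mirror half of the images
    T.ColorJitter(0.4, 0.4, 0.4),    # jitter brightness/contrast/saturation
    T.ToTensor(),
])
# Applying `augment` at load time yields a new view of each image per
# epoch, increasing the effective quantity and diversity of the data.
```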
A special branch of adversarial examples, namely sparse adversarial examples, can fool target DNNs by perturbing only a few pixels.
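To make the sparsity constraint concrete, here is a minimal sketch of a low-L0 perturbation; `sparse_perturb` is a hypothetical helper, not the attack from any specific paper, and simply pushes a handful of chosen pixels toward a clipping boundary:

```python
import numpy as np

def sparse_perturb(image, pixels, epsilon=1.0):
    """Perturb only the given pixel locations (hypothetical helper)."""
    adv = image.copy()
    for (y, x) in pixels:
        # Shift each chosen pixel, keeping values in the valid range.
        adv[y, x] = np.clip(adv[y, x] + epsilon, 0.0, 1.0)
    return adv

image = np.random.rand(32, 32)          # toy grayscale image in [0, 1]
adv = sparse_perturb(image, [(0, 0), (15, 15), (31, 31)])
print(np.count_nonzero(adv != image))   # at most 3 pixels differ (L0 <= 3)
```

A real sparse attack chooses the pixel locations and values by optimization rather than by hand, but the small L0 budget is the defining property.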
We perform extensive experiments to show that pruning based on the influence function, combined with the idea of ensemble learning, is substantially more effective than focusing on reconstruction error alone.
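For reference, the influence of a training point on a test loss is commonly estimated with the classical influence function below, following Koh and Liang's formulation (the exact variant used here may differ); theta-hat denotes the trained parameters and H the Hessian of the empirical risk:

```latex
% Influence of up-weighting training point z on the loss at z_test,
% where \hat{\theta} are the learned parameters and H_{\hat{\theta}}
% is the Hessian of the empirical risk.
\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}})
  = -\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
    H_{\hat{\theta}}^{-1}
    \nabla_{\theta} L(z, \hat{\theta})
```

Influence-based pruning then ranks candidates by this quantity rather than by reconstruction error alone.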
With the development of deep networks and the release of a series of large-scale datasets for single object tracking, Siamese networks have been proposed and outperform most traditional methods.
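A minimal sketch of the Siamese matching step used by fully-convolutional trackers such as SiamFC follows; it is a generic illustration with toy sizes, not the exact model discussed here:

```python
import torch
import torch.nn.functional as F

def siamese_response(embed, template_img, search_img):
    z = embed(template_img)   # template features, shape (1, C, h, w)
    x = embed(search_img)     # search-region features, shape (1, C, H, W)
    # Cross-correlation: the template features act as a convolution
    # kernel, so the response map peaks where the search region matches.
    return F.conv2d(x, z)

embed = torch.nn.Conv2d(3, 8, kernel_size=3)  # stand-in for a shared backbone
score = siamese_response(embed,
                         torch.rand(1, 3, 31, 31),   # template crop
                         torch.rand(1, 3, 63, 63))   # search region
print(score.shape)  # torch.Size([1, 1, 33, 33]) response map
```

The same embedding network processes both inputs, which is what makes the architecture "Siamese".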
Inspired by the elastic collision model in physics, we present a general structure that can be integrated into existing CNNs to improve their performance.
We therefore propose an exploratory architecture, referred to as the Temporal Convolutional Attention-based Network (TCAN), which combines a temporal convolutional network with an attention mechanism.
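A minimal sketch of such a combination, pairing a causal temporal convolution with masked self-attention, is shown below; the layer sizes and exact composition are illustrative assumptions, and the published TCAN differs in detail:

```python
import torch
import torch.nn as nn

class TemporalConvAttention(nn.Module):
    """Causal temporal convolution followed by causal self-attention."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1                     # left-pad => causal
        self.conv = nn.Conv1d(channels, channels, kernel_size)
        self.attn = nn.MultiheadAttention(channels, num_heads=2,
                                          batch_first=True)

    def forward(self, x):                              # x: (B, T, C)
        h = self.conv(nn.functional.pad(x.transpose(1, 2), (self.pad, 0)))
        h = h.transpose(1, 2)                          # back to (B, T, C)
        t = h.size(1)                                  # causal attention mask
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        out, _ = self.attn(h, h, h, attn_mask=mask)
        return out + h                                 # residual connection

y = TemporalConvAttention(16)(torch.rand(4, 20, 16))
print(y.shape)  # torch.Size([4, 20, 16])
```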
PIGAT introduces an attention mechanism to weigh the importance of each interacted user/item to both the user and the item, thereby capturing user interests, item attractiveness, and their influence on the recommendation context.
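A minimal sketch of this kind of interaction-level attention follows; the names and dimensions are illustrative, and PIGAT's actual formulation (graph attention over users and items) is more elaborate:

```python
import torch
import torch.nn.functional as F

def attend(user_emb, item_embs):
    """Weight each interacted item by its relevance to the user."""
    scores = item_embs @ user_emb          # (N,) dot-product relevance
    weights = F.softmax(scores, dim=0)     # importance of each interaction
    return weights @ item_embs             # attention-pooled user interest

user_emb = torch.rand(8)                   # embedding of one user
item_embs = torch.rand(5, 8)               # 5 items the user interacted with
print(attend(user_emb, item_embs).shape)   # torch.Size([8])
```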
To the best of our knowledge, this paper is the first to discuss the difficulty of class-incremental learning without support from old classes, which we call the softmax suppression problem.
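The effect is visible in the gradient of cross-entropy with respect to the logits, which equals softmax(logits) minus the one-hot target: classes that never appear as targets receive a strictly positive gradient, so their logits are pushed down. A tiny demonstration (the class split here is a toy assumption):

```python
import torch
import torch.nn.functional as F

# Four classes: 0-1 are "old", 2-3 are "new". In the incremental phase
# only new-class labels occur in the training batches.
logits = torch.zeros(1, 4, requires_grad=True)
loss = F.cross_entropy(logits, torch.tensor([2]))   # target is a new class
loss.backward()
print(logits.grad)  # tensor([[0.25, 0.25, -0.75, 0.25]])
# Old-class entries get positive gradients, so gradient descent keeps
# lowering their logits: the softmax "suppresses" the old classes.
```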