A Comparative Study on Efficiencies of Variants of Convolutional Neural Networks based on Image Classification Task

15 Oct 2020  ·  Ayush Sharma

Deep neural networks achieve high performance on image classification tasks but are difficult to train: because of their depth and the vanishing-gradient problem, training deeper networks typically takes a long time and substantial computational resources. Deep Residual Networks (ResNets), however, make training easier and faster while achieving better accuracy than their plain counterparts, and have proven very successful on image classification. We built two very different networks from scratch based on the idea of Densely Connected Convolutional Networks. The architecture of each network is designed around the image resolution of this specific dataset by calculating the receptive field of the convolutional layers. We also used some non-conventional techniques related to image augmentation and early stopping to improve the accuracy of our models. The networks were trained under tight constraints and low computational resources.
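The receptive-field-driven design mentioned above can be made concrete with the standard recurrence for stacked convolution/pooling layers: each layer with kernel size k and stride s grows the receptive field by (k − 1) times the cumulative jump, and multiplies the jump by s. The sketch below is a minimal illustration of that calculation; the layer specs are hypothetical, not the paper's actual architecture.

```python
def receptive_field(layers):
    """Compute the receptive field of a stack of conv/pool layers.

    layers: list of (kernel_size, stride) tuples, ordered input to output.
    Uses the standard recurrence: rf += (k - 1) * jump; jump *= s.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Hypothetical example: three 3x3 convs (stride 1), a 2x2 max-pool
# (stride 2), then one more 3x3 conv.
layers = [(3, 1), (3, 1), (3, 1), (2, 2), (3, 1)]
print(receptive_field(layers))  # → 12
```

Designing so that the final receptive field covers (or slightly exceeds) the dataset's image resolution is a common heuristic; the paper's exact target values are not given in this abstract.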
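Early stopping, also mentioned above, halts training once the validation loss stops improving for a fixed number of epochs. The abstract does not specify the exact criterion used, so the monitor and patience values in this framework-agnostic sketch are illustrative assumptions.

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs.

    Illustrative sketch; the paper's actual monitored metric, patience,
    and tolerance are not stated in the abstract.
    """

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Usage: stops after two consecutive non-improving epochs.
stopper = EarlyStopping(patience=2)
for loss in [0.9, 0.7, 0.71, 0.72, 0.5]:
    if stopper.step(loss):
        break  # triggered at 0.72; the 0.5 epoch is never reached
```

Under low computational budgets such as those described here, early stopping both saves compute and acts as a regularizer against overfitting.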


