Search Results for author: Mayank Sharma

Found 6 papers, 1 paper with code

Smaller Models, Better Generalization

no code implementations · 29 Aug 2019 · Mayank Sharma, Suraj Tripathi, Abhimanyu Dubey, Jayadeva, Sai Guruju, Nihal Goalla

Reducing network complexity has been a major research focus in recent years with the advent of mobile technology.

Quantization

Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise

no code implementations · 31 Jan 2019 · Mayank Sharma, Aayush Yadav, Sumit Soman, Jayadeva

We show that $L_2$ regularization leads to a simpler hypothesis class and better generalization, followed by the DARC1 regularizer, for both shallow and deep architectures.
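As a minimal illustration of the $L_2$ regularization compared here (a generic sketch, not the paper's implementation; the function name and the weight values are hypothetical):

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam=1e-3):
    # Hypothetical helper: add an L2 penalty lam * sum(||w||^2)
    # over all weight arrays to the unregularized data loss.
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return data_loss + penalty

# Toy example: two weight arrays with squared norms 4 and 3.
weights = [np.ones((2, 2)), np.ones(3)]
loss = l2_regularized_loss(0.5, weights, lam=0.1)
# 0.5 + 0.1 * (4 + 3) = 1.2
```

Penalizing the squared weight norms shrinks the effective hypothesis class, which is the mechanism behind the simpler-hypothesis claim above.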

Radius-margin bounds for deep neural networks

no code implementations · 3 Nov 2018 · Mayank Sharma, Jayadeva, Sumit Soman

Explaining the unreasonable effectiveness of deep learning has eluded researchers around the globe.

Learning Neural Network Classifiers with Low Model Complexity

no code implementations · 31 Jul 2017 · Jayadeva, Himanshu Pant, Mayank Sharma, Abhimanyu Dubey, Sumit Soman, Suraj Tripathi, Sai Guruju, Nihal Goalla

Our proposed approach yields benefits across a wide range of architectures, both in comparison to and in conjunction with methods such as Dropout and Batch Normalization. Our results strongly suggest that deep learning techniques can benefit from model complexity control methods such as the LCNN learning rule.

Scalable Twin Neural Networks for Classification of Unbalanced Data

1 code implementation · 30 Apr 2017 · Jayadeva, Himanshu Pant, Sumit Soman, Mayank Sharma

In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets.

Classification · General Classification
