no code implementations • 16 Feb 2021 • Kartikeya Badola, Sameer Ambekar, Himanshu Pant, Sumit Soman, Anuradha Sural, Rajiv Narang, Suresh Chandra, Jayadeva
We show that popular dataset selection practices suffer from data homogeneity, leading to misleading results.
no code implementations • 20 Nov 2020 • Himanshu Pant, Jayadeva, Sumit Soman
One of the issues faced in training Generative Adversarial Nets (GANs) and their variants is mode collapse, wherein the generator captures only a few modes of the data distribution, so the diversity of generated samples remains poor even as more training data is used.
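The snippet above mentions mode collapse without illustrating it. A minimal, hedged sketch (not from the paper) of a toy 1-D diagnostic: if a generator's samples cluster near one mode of a multi-modal target, the fraction of modes it covers stays low.

```python
def mode_coverage(samples, modes, radius):
    """Fraction of target modes hit by at least one generated sample.

    A toy 1-D diagnostic: a collapsed generator concentrates its
    samples near a few modes, so coverage stays low. The mode list,
    samples, and radius here are illustrative, not from the paper.
    """
    hit = [m for m in modes if any(abs(s - m) <= radius for s in samples)]
    return len(hit) / len(modes)

modes = [0.0, 5.0, 10.0, 15.0]          # modes of a toy data distribution
diverse = [0.1, 5.2, 9.8, 14.9]         # samples spread across all modes
collapsed = [4.9, 5.0, 5.1, 5.05]       # samples stuck near one mode

print(mode_coverage(diverse, modes, radius=0.5))    # → 1.0
print(mode_coverage(collapsed, modes, radius=0.5))  # → 0.25
```

In practice mode collapse is diagnosed on high-dimensional data with metrics such as coverage over labeled classes, but the principle is the same: diverse samples touch most modes, collapsed ones do not.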
no code implementations • 31 Jul 2017 • Jayadeva, Himanshu Pant, Mayank Sharma, Abhimanyu Dubey, Sumit Soman, Suraj Tripathi, Sai Guruju, Nihal Goalla
Our proposed approach yields benefits across a wide range of architectures, both in comparison to and in conjunction with methods such as Dropout and Batch Normalization, and our results strongly suggest that deep learning techniques can benefit from model complexity control methods such as the LCNN learning rule.
1 code implementation • 30 Apr 2017 • Jayadeva, Himanshu Pant, Sumit Soman, Mayank Sharma
In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets.