Search Results for author: Jayadeva

Found 18 papers, 1 paper with code

Complexity Controlled Generative Adversarial Networks

no code implementations · 20 Nov 2020 · Himanshu Pant, Jayadeva, Sumit Soman

One of the issues faced in training Generative Adversarial Nets (GANs) and their variants is the problem of mode collapse, wherein the training stability in terms of the generative loss increases as more training data is used.
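
For context, the standard GAN objective that such complexity-control methods build on is the minimax game

    $\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$

with mode collapse referring to the generator $G$ concentrating its samples on a few modes of $p_{\text{data}}$. This is the generic formulation only; the paper's specific complexity-control mechanism is not reproduced here.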

Enhash: A Fast Streaming Algorithm For Concept Drift Detection

no code implementations · 7 Nov 2020 · Aashi Jindal, Prashant Gupta, Debarka Sengupta, Jayadeva

We propose Enhash, a fast ensemble learner that detects concept drift in a data stream.
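
Enhash's hashing-based internals are not reproduced below; the following is only a generic, hypothetical illustration of ensemble-style drift monitoring on a stream, flagging drift when the recent error rate rises well above the error on an earlier reference window. The window size and threshold are arbitrary choices, not the paper's.

    from collections import deque

    class SimpleDriftMonitor:
        """Generic streaming drift check; illustration only, not the Enhash algorithm."""
        def __init__(self, window=200, threshold=0.1):
            self.reference = deque(maxlen=window)  # errors from an early reference window
            self.recent = deque(maxlen=window)     # most recent errors
            self.threshold = threshold

        def update(self, y_true, y_pred):
            err = int(y_true != y_pred)
            if len(self.reference) < self.reference.maxlen:
                self.reference.append(err)
            self.recent.append(err)
            return self.drift_detected()

        def drift_detected(self):
            if len(self.recent) < self.recent.maxlen:
                return False
            ref_err = sum(self.reference) / len(self.reference)
            rec_err = sum(self.recent) / len(self.recent)
            return rec_err - ref_err > self.threshold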

Guided Random Forest and its application to data approximation

no code implementations · 2 Sep 2019 · Prashant Gupta, Aashi Jindal, Jayadeva, Debarka Sengupta

We present a new way of constructing an ensemble classifier, named the Guided Random Forest (GRAF) in the sequel.

Smaller Models, Better Generalization

no code implementations · 29 Aug 2019 · Mayank Sharma, Suraj Tripathi, Abhimanyu Dubey, Jayadeva, Sai Guruju, Nihal Goalla

Reducing network complexity has been a major research focus in recent years with the advent of mobile technology.

Quantization

Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise

no code implementations · 31 Jan 2019 · Mayank Sharma, Aayush Yadav, Sumit Soman, Jayadeva

We show that $L_2$ regularization leads to a simpler hypothesis class and better generalization followed by DARC1 regularizer, both for shallow as well as deeper architectures.
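
As a concrete reminder of what the $L_2$ term adds to training, a minimal PyTorch-style sketch is given below. The layer sizes and the coefficient lam are arbitrary illustrative choices, and the DARC1 regularizer compared in the paper is not reproduced here.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    criterion = nn.CrossEntropyLoss()
    lam = 1e-4  # illustrative L2 coefficient

    x = torch.randn(32, 20)              # toy batch
    y = torch.randint(0, 2, (32,))

    logits = model(x)
    l2_term = sum(p.pow(2).sum() for p in model.parameters())
    loss = criterion(logits, y) + lam * l2_term   # L2-regularized objective
    loss.backward()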

Radius-margin bounds for deep neural networks

no code implementations · 3 Nov 2018 · Mayank Sharma, Jayadeva, Sumit Soman

Explaining the unreasonable effectiveness of deep learning has eluded researchers around the globe.

Learning Neural Network Classifiers with Low Model Complexity

no code implementations · 31 Jul 2017 · Jayadeva, Himanshu Pant, Mayank Sharma, Abhimanyu Dubey, Sumit Soman, Suraj Tripathi, Sai Guruju, Nihal Goalla

Our proposed approach yields benefits across a wide range of architectures, in comparison to and in conjunction with methods such as Dropout and Batch Normalization, and our results strongly suggest that deep learning techniques can benefit from model complexity control methods such as the LCNN learning rule.
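
The LCNN learning rule itself is not reproduced here; the sketch below only illustrates the general pattern the sentence describes, i.e. adding a model-complexity penalty to the task loss of a network that already uses Dropout and Batch Normalization. The penalty shown is a simple norm-based stand-in, not the paper's bound.

    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Linear(20, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(64, 2),
    )
    criterion = nn.CrossEntropyLoss()
    lam = 1e-3  # illustrative penalty weight

    x = torch.randn(32, 20)
    y = torch.randint(0, 2, (32,))

    logits = net(x)
    complexity = net[-1].weight.norm()   # stand-in complexity penalty on the output layer
    loss = criterion(logits, y) + lam * complexity
    loss.backward()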

Scalable Twin Neural Networks for Classification of Unbalanced Data

1 code implementation · 30 Apr 2017 · Jayadeva, Himanshu Pant, Sumit Soman, Mayank Sharma

In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets.

Classification, General Classification

Examining Representational Similarity in ConvNets and the Primate Visual Cortex

no code implementations · 12 Sep 2016 · Abhimanyu Dubey, Jayadeva, Sumeet Agarwal

We compare several ConvNets, varying in depth and regularization techniques, against multi-unit macaque IT cortex recordings, and assess how depth and regularization affect representational similarity with the primate visual cortex.
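
The paper's exact comparison metric is not reproduced here; a common recipe for such comparisons is representational similarity analysis: build a representational dissimilarity matrix (RDM) for each system over a shared stimulus set, then rank-correlate the two RDMs. Below is a minimal sketch with random stand-in data (stimulus counts and feature dimensions are arbitrary).

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_stimuli = 50
    convnet_features = rng.standard_normal((n_stimuli, 512))   # e.g. activations of one ConvNet layer
    cortex_responses = rng.standard_normal((n_stimuli, 100))   # e.g. multi-unit IT recordings

    # RDM: pairwise dissimilarity (1 - Pearson correlation) between stimulus representations
    rdm_convnet = pdist(convnet_features, metric="correlation")
    rdm_cortex = pdist(cortex_responses, metric="correlation")

    # Representational similarity: rank correlation between the two RDMs
    rho, _ = spearmanr(rdm_convnet, rdm_cortex)
    print(f"RDM Spearman correlation: {rho:.3f}")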

A Neurodynamical System for finding a Minimal VC Dimension Classifier

no code implementations · 11 Mar 2015 · Jayadeva, Sumit Soman, Amit Bhaya

The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC) dimension.
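
For reference, the linear MCM is commonly summarized by the following hard-margin optimization problem (a sketch only; the paper works with a soft-margin variant and its neurodynamical realization):

    $\min_{w,\, b,\, h} \; h \quad \text{subject to} \quad h \,\ge\, y_i \left( w^\top x_i + b \right) \,\ge\, 1, \qquad i = 1, \dots, M$

where the scalar $h$ controls the exact VC-dimension bound being minimized.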

Benchmarking NLopt and state-of-art algorithms for Continuous Global Optimization via Hybrid IACO$_\mathbb{R}$

no code implementations · 11 Mar 2015 · Udit Kumar, Sumit Soman, Jayadeva

This paper presents a comparative analysis of the performance of the Incremental Ant Colony algorithm for continuous optimization ($IACO_\mathbb{R}$), with different algorithms provided in the NLopt library.
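
NLopt exposes these algorithms through simple bindings; a minimal Python sketch of running one of its global optimizers on a toy objective is shown below. The algorithm choice, bounds, and evaluation budget are illustrative and do not reflect the paper's benchmark protocol.

    import numpy as np
    import nlopt

    def sphere(x, grad):
        # NLopt objective signature: the gradient array is filled in place when provided
        if grad.size > 0:
            grad[:] = 2 * x
        return float(np.sum(x ** 2))

    dim = 5
    opt = nlopt.opt(nlopt.GN_DIRECT_L, dim)   # a derivative-free global optimizer
    opt.set_lower_bounds([-5.0] * dim)
    opt.set_upper_bounds([5.0] * dim)
    opt.set_min_objective(sphere)
    opt.set_maxeval(2000)                     # evaluation budget

    x_best = opt.optimize(np.zeros(dim))
    print("best value:", opt.last_optimum_value())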

Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC dimension

no code implementations · 11 Jan 2015 · Jayadeva, Sanjit Singh Batra, Siddarth Sabharwal

The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization.

Feature Selection through Minimization of the VC dimension

no code implementations · 27 Oct 2014 · Jayadeva, Sanjit S. Batra, Siddharth Sabharwal

For a linear hyperplane classifier in the input space, the VC dimension is upper bounded by the number of features; hence, a linear classifier with a small VC dimension is parsimonious in the set of features it employs.
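
The bound referred to here is the classical one: hyperplane classifiers over $\mathbb{R}^n$ (with a bias term) have VC dimension $n + 1$, so a linear classifier restricted to $k$ of the features has VC dimension at most $k + 1$. Minimizing a data-dependent bound on the VC dimension, as the paper does, therefore encourages solutions that rely on few features.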

Drug Discovery, Variable Selection

Learning a hyperplane regressor by minimizing an exact bound on the VC dimension

no code implementations · 16 Oct 2014 · Jayadeva, Suresh Chandra, Siddarth Sabharwal, Sanjit S. Batra

The capacity of a learning machine is measured by its Vapnik-Chervonenkis dimension, and learning machines with a low VC dimension generalize better.

Learning a hyperplane classifier by minimizing an exact bound on the VC dimension

no code implementations · 12 Aug 2014 · Jayadeva

The VC dimension measures the capacity of a learning machine, and a low VC dimension leads to good generalization.
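
Since the hard-margin linear formulation quoted earlier reduces to a linear program, a small self-contained sketch with scipy is possible; the toy dataset below is arbitrary, and the soft-margin and kernelized versions discussed in the papers (which add slack variables and a kernel map) are omitted.

    import numpy as np
    from scipy.optimize import linprog

    # Toy 2-D separable data
    X = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0],
                  [-2.0, -1.0], [-1.5, -2.5], [-3.0, -2.0]])
    y = np.array([1, 1, 1, -1, -1, -1])
    m, d = X.shape

    # Variables z = [w (d entries), b, h]; objective: minimize h
    c = np.zeros(d + 2)
    c[-1] = 1.0

    # Constraints 1 <= y_i (w.x_i + b) <= h, written as A_ub @ z <= b_ub
    A_lower = -np.hstack([y[:, None] * X, y[:, None], np.zeros((m, 1))])  # -y_i(w.x_i + b) <= -1
    A_upper = np.hstack([y[:, None] * X, y[:, None], -np.ones((m, 1))])   #  y_i(w.x_i + b) - h <= 0
    A_ub = np.vstack([A_lower, A_upper])
    b_ub = np.concatenate([-np.ones(m), np.zeros(m)])

    bounds = [(None, None)] * (d + 1) + [(1.0, None)]   # w, b free; h >= 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, b, h = res.x[:d], res.x[d], res.x[d + 1]
    print("w =", w, "b =", b, "h =", h)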
