1 code implementation • 30 Apr 2017 • Jayadeva, Himanshu Pant, Sumit Soman, Mayank Sharma
In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets.
no code implementations • 31 Jul 2017 • Jayadeva, Himanshu Pant, Mayank Sharma, Abhimanyu Dubey, Sumit Soman, Suraj Tripathi, Sai Guruju, Nihal Goalla
Our proposed approach yields benefits across a wide range of architectures, in comparison to and in conjunction with methods such as Dropout and Batch Normalization, and our results strongly suggest that deep learning techniques can benefit from model complexity control methods such as the LCNN learning rule.
no code implementations • 12 Sep 2016 • Abhimanyu Dubey, Jayadeva, Sumeet Agarwal
We compare several ConvNets of varying depth and regularization techniques against multi-unit macaque IT cortex recordings, and assess their impact on representational similarity with the primate visual cortex.
no code implementations • 27 Sep 2015 • Phool Preet, Sanjit Singh Batra, Jayadeva
Hyperspectral data consists of a large number of features, whose extraction requires sophisticated analysis.
no code implementations • 11 Mar 2015 • Udit Kumar, Sumit Soman, Jayadeva
This paper presents a comparative analysis of the performance of the Incremental Ant Colony algorithm for continuous optimization ($IACO_\mathbb{R}$), with different algorithms provided in the NLopt library.
no code implementations • 11 Mar 2015 • Jayadeva, Sumit Soman, Amit Bhaya
The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC) dimension.
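In the linear case, the MCM's bound minimization reduces to a small linear program: minimize h subject to 1 ≤ y_i(w·x_i + b) ≤ h for every training point. A minimal sketch of that hard-margin linear formulation, assuming SciPy is available (the toy data and variable names are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def mcm_linear(X, y):
    """Linear MCM as an LP: min h  s.t.  1 <= y_i (w.x_i + b) <= h."""
    m, n = X.shape
    # variable vector z = [w (n entries), b, h]
    c = np.zeros(n + 2)
    c[-1] = 1.0                                   # objective: minimize h
    Yx = y[:, None] * X                           # rows y_i * x_i
    # -y_i (w.x_i + b) <= -1   (margin at least 1)
    A1 = np.hstack([-Yx, -y[:, None], np.zeros((m, 1))])
    b1 = -np.ones(m)
    #  y_i (w.x_i + b) - h <= 0  (margin at most h)
    A2 = np.hstack([Yx, y[:, None], -np.ones((m, 1))])
    b2 = np.zeros(m)
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([b1, b2]),
                  bounds=[(None, None)] * (n + 1) + [(1, None)])
    w, b, h = res.x[:n], res.x[n], res.x[n + 1]
    return w, b, h

# separable toy problem: two positive and two negative points
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b, h = mcm_linear(X, y)
```

The constraints force every training margin into [1, h], so minimizing h shrinks the ratio that the MCM's VC-dimension bound depends on.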
no code implementations • 11 Jan 2015 • Jayadeva, Sanjit Singh Batra, Siddarth Sabharwal
The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization.
no code implementations • 27 Oct 2014 • Jayadeva, Sanjit S. Batra, Siddharth Sabharwal
For a linear hyperplane classifier in the input space, the VC dimension is upper bounded by the number of features; hence, a linear classifier with a small VC dimension is parsimonious in the set of features it employs.
no code implementations • 16 Oct 2014 • Jayadeva, Suresh Chandra, Siddarth Sabharwal, Sanjit S. Batra
The capacity of a learning machine is measured by its Vapnik-Chervonenkis dimension, and learning machines with a low VC dimension generalize better.
no code implementations • 12 Aug 2014 • Jayadeva
The VC dimension measures the capacity of a learning machine, and a low VC dimension leads to good generalization.
no code implementations • 3 Nov 2018 • Mayank Sharma, Jayadeva, Sumit Soman
Explaining the unreasonable effectiveness of deep learning has eluded researchers around the globe.
no code implementations • 31 Jan 2019 • Mayank Sharma, Aayush Yadav, Sumit Soman, Jayadeva
We show that $L_2$ regularization leads to a simpler hypothesis class and better generalization, followed by the DARC1 regularizer, for both shallow and deep architectures.
no code implementations • 29 Aug 2019 • Mayank Sharma, Suraj Tripathi, Abhimanyu Dubey, Jayadeva, Sai Guruju, Nihal Goalla
Reducing network complexity has been a major research focus in recent years with the advent of mobile technology.
no code implementations • 2 Sep 2019 • Prashant Gupta, Aashi Jindal, Jayadeva, Debarka Sengupta
We present a new way of constructing an ensemble classifier, named the Guided Random Forest (GRAF).
no code implementations • 7 Nov 2020 • Aashi Jindal, Prashant Gupta, Debarka Sengupta, Jayadeva
We propose Enhash, a fast ensemble learner that detects concept drift in a data stream.
no code implementations • 20 Nov 2020 • Himanshu Pant, Jayadeva, Sumit Soman
One of the issues faced in training Generative Adversarial Nets (GANs) and their variants is the problem of mode collapse, wherein the training stability in terms of the generative loss increases as more training data is used.
no code implementations • 16 Feb 2021 • Kartikeya Badola, Sameer Ambekar, Himanshu Pant, Sumit Soman, Anuradha Sural, Rajiv Narang, Suresh Chandra, Jayadeva
We show that popular choices of dataset selection suffer from data homogeneity, leading to misleading results.
no code implementations • 22 Jul 2022 • Shruti Pandey, Jayadeva, Smruti R. Sarangi
As compared to a popular, state-of-the-art commercial ATPG tool, HybMT shows an overall reduction of 56.6% in the CPU time without compromising on the fault coverage for the EPFL benchmark circuits.
no code implementations • 19 Oct 2022 • Suresh Bishnoi, Skyler Badge, Jayadeva, N. M. Anoop Krishnan
In addition, we combine the LCNN with physical and chemical descriptors that allow the development of universal models that can provide predictions for components beyond the training set.
no code implementations • 6 Nov 2022 • Mohd Zaki, Siddhant Sharma, Sunil Kumar Gurjar, Raju Goyal, Jayadeva, N. M. Anoop Krishnan
Specifically, we fine-tune the image detection and segmentation model Detectron2 on cement microstructure images to develop a model, named Cementron, for detecting the cement phases.
no code implementations • 20 Jun 2023 • Suresh Bishnoi, Jayadeva, Sayan Ranu, N. M. Anoop Krishnan
Here, we propose a framework, namely Brownian graph neural networks (BROGNET), combining stochastic differential equations (SDEs) and GNNs to learn Brownian dynamics directly from the trajectory.
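Brownian dynamics of the kind BROGNET learns obey an overdamped Langevin SDE, dx = (1/γ) f(x) dt + √(2kT/γ) dW; a learned GNN drift would slot in where the analytic force appears below. A minimal Euler–Maruyama integrator sketch (the harmonic force, `gamma`, and `kT` are illustrative assumptions, not the paper's model):

```python
import numpy as np

def euler_maruyama(force, x0, dt, n_steps, kT=1.0, gamma=1.0, seed=None):
    """Integrate overdamped Langevin (Brownian) dynamics:
       dx = force(x)/gamma * dt + sqrt(2*kT/gamma) * dW."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    sigma = np.sqrt(2.0 * kT / gamma * dt)       # noise scale per step
    for _ in range(n_steps):
        x = x + force(x) / gamma * dt + sigma * rng.standard_normal(x.shape)
        traj.append(x.copy())
    return np.array(traj)

# harmonic well force f(x) = -k x: the particle stays bound near the origin
traj = euler_maruyama(lambda x: -1.0 * x, x0=np.ones(3),
                      dt=1e-3, n_steps=5000, seed=0)
```

Replacing the lambda with a trained network's force prediction gives the rollout procedure such a framework would use at inference time.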
no code implementations • 11 Jul 2023 • Suresh Bishnoi, Ravinder Bhattoo, Jayadeva, Sayan Ranu, N M Anoop Krishnan
Here, we present a Hamiltonian graph neural network (HGNN), a physics-enforced GNN that learns the dynamics of systems directly from their trajectory.
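The dynamics a Hamiltonian-based learner targets follow Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = −∂H/∂q, and physics-enforced models typically pair a (learned) Hamiltonian with a symplectic integrator. A minimal leapfrog sketch for a separable H(q, p) = p²/2m + V(q), with a harmonic potential standing in for a learned one:

```python
import numpy as np

def leapfrog(grad_V, q, p, dt, n_steps, mass=1.0):
    """Symplectic leapfrog for separable H(q, p) = p^2/(2m) + V(q)."""
    q, p = np.array(q, dtype=float), np.array(p, dtype=float)
    p = p - 0.5 * dt * grad_V(q)          # initial half kick
    for step in range(n_steps):
        q = q + dt * p / mass             # drift
        if step != n_steps - 1:
            p = p - dt * grad_V(q)        # full kick
    p = p - 0.5 * dt * grad_V(q)          # final half kick
    return q, p

# harmonic oscillator V(q) = q^2 / 2, so grad_V(q) = q
grad_V = lambda q: q
q0, p0 = np.array([1.0]), np.array([0.0])
q1, p1 = leapfrog(grad_V, q0, p0, dt=0.01, n_steps=1000)
E0 = float(0.5 * p0 @ p0 + 0.5 * q0 @ q0)
E1 = float(0.5 * p1 @ p1 + 0.5 * q1 @ q1)
```

The symplectic update is what keeps the energy error bounded over long rollouts, which is the property a physics-enforced model exploits.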
no code implementations • 17 Aug 2023 • Mohd Zaki, Jayadeva, Mausam, N. M. Anoop Krishnan
Further, we evaluate the performance of the GPT-3.5 and GPT-4 models on solving these questions via zero-shot and chain-of-thought prompting.
no code implementations • 10 Dec 2023 • Nimesh Agrawal, Anuj Kumar Sirohi, Jayadeva, Sandeep Kumar
However, integrating these graph models into the Federated Learning (FL) paradigm with fairness constraints poses formidable challenges, as this requires access to the entire interaction graph and sensitive user information (such as gender, age, etc.).