Search Results for author: Jayadeva

Found 24 papers, 1 paper with code

Scalable Twin Neural Networks for Classification of Unbalanced Data

1 code implementation • 30 Apr 2017 • Jayadeva, Himanshu Pant, Sumit Soman, Mayank Sharma

In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets.

Tasks: Classification, General Classification
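The Twin NN architecture itself is not described in this snippet; as a generic illustration of one common remedy for unbalanced data (inverse-frequency loss weighting, not the paper's method), a minimal sketch:

```python
import numpy as np

# Hypothetical unbalanced label vector: 8 negatives, 2 positives.
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])

# Inverse-frequency class weights: the rarer class gets a larger weight,
# so the minority class contributes comparably to the loss.
counts = np.bincount(y)                      # [8, 2]
class_w = len(y) / (len(counts) * counts)    # [0.625, 2.5]

# Weighted logistic (cross-entropy) loss for per-sample predictions p.
p = np.full(len(y), 0.5)                     # uninformative predictions
sample_w = class_w[y]
loss = -np.mean(sample_w * (y * np.log(p) + (1 - y) * np.log(1 - p)))
print(class_w, loss)
```

With these weights, each class accounts for half of the total loss regardless of its frequency in the stream of samples.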

Learning Neural Network Classifiers with Low Model Complexity

no code implementations • 31 Jul 2017 • Jayadeva, Himanshu Pant, Mayank Sharma, Abhimanyu Dubey, Sumit Soman, Suraj Tripathi, Sai Guruju, Nihal Goalla

Our proposed approach yields benefits across a wide range of architectures, in comparison to and in conjunction with methods such as Dropout and Batch Normalization, and our results strongly suggest that deep learning techniques can benefit from model complexity control methods such as the LCNN learning rule.

Examining Representational Similarity in ConvNets and the Primate Visual Cortex

no code implementations • 12 Sep 2016 • Abhimanyu Dubey, Jayadeva, Sumeet Agarwal

We compare several ConvNets of varying depth and with different regularization techniques against multi-unit macaque IT cortex recordings, and assess how these choices affect representational similarity with the primate visual cortex.

Benchmarking NLopt and state-of-art algorithms for Continuous Global Optimization via Hybrid IACO$_\mathbb{R}$

no code implementations • 11 Mar 2015 • Udit Kumar, Sumit Soman, Jayadeva

This paper presents a comparative analysis of the performance of the Incremental Ant Colony algorithm for continuous optimization ($IACO_\mathbb{R}$), with different algorithms provided in the NLopt library.

Tasks: Benchmarking

A Neurodynamical System for finding a Minimal VC Dimension Classifier

no code implementations • 11 Mar 2015 • Jayadeva, Sumit Soman, Amit Bhaya

The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC) dimension.

Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC dimension

no code implementations • 11 Jan 2015 • Jayadeva, Sanjit Singh Batra, Siddarth Sabharwal

The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization.
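For context (not stated in the snippet itself), this line of work builds on the classic radius-margin bound due to Vapnik: for fat-margin hyperplane classifiers with margin $d$ on data contained in a ball of radius $R$ in $\mathbb{R}^n$, the VC dimension $h$ satisfies

```latex
\[
  h \;\le\; \min\!\left( \left\lceil \frac{R^2}{d^2} \right\rceil,\; n \right) + 1
\]
```

so maximizing the margin relative to the data radius directly caps the capacity of the classifier.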

Feature Selection through Minimization of the VC dimension

no code implementations • 27 Oct 2014 • Jayadeva, Sanjit S. Batra, Siddharth Sabharwal

For a linear hyperplane classifier in the input space, the VC dimension is upper bounded by the number of features; hence, a linear classifier with a small VC dimension is parsimonious in the set of features it employs.

Tasks: Drug Discovery, Feature Selection +1
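The snippet notes that a parsimonious linear classifier uses few features. As an illustration of that idea (using a generic L1 penalty via ISTA, not the paper's VC-bound formulation), the weight on an irrelevant feature is driven to exactly zero:

```python
import numpy as np

# Toy regression data: the target depends only on the first feature;
# the second column is constructed orthogonal to both y and column one.
X = np.array([[1., 1.], [2., -1.], [3., -1.], [4., 1.]])
y = np.array([1., 2., 3., 4.])

lam, step = 0.3, 0.1                  # L1 strength, gradient step size
w = np.zeros(2)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)             # least-squares gradient
    w = w - step * grad                           # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold

print(w)   # w[1] is exactly zero; w[0] is shrunk slightly below 1
```

The soft-thresholding step is what produces exact zeros, i.e. a hyperplane defined on a reduced feature set.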

Learning a hyperplane regressor by minimizing an exact bound on the VC dimension

no code implementations • 16 Oct 2014 • Jayadeva, Suresh Chandra, Siddarth Sabharwal, Sanjit S. Batra

The capacity of a learning machine is measured by its Vapnik-Chervonenkis dimension, and learning machines with a low VC dimension generalize better.

Learning a hyperplane classifier by minimizing an exact bound on the VC dimension

no code implementations • 12 Aug 2014 • Jayadeva

The VC dimension measures the capacity of a learning machine, and a low VC dimension leads to good generalization.

Radius-margin bounds for deep neural networks

no code implementations • 3 Nov 2018 • Mayank Sharma, Jayadeva, Sumit Soman

Explaining the unreasonable effectiveness of deep learning has eluded researchers around the globe.

Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise

no code implementations • 31 Jan 2019 • Mayank Sharma, Aayush Yadav, Sumit Soman, Jayadeva

We show that $L_2$ regularization leads to a simpler hypothesis class and better generalization followed by DARC1 regularizer, both for shallow as well as deeper architectures.
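The shrinking effect of an $L_2$ penalty can be seen directly from the closed-form ridge solution $w = (X^\top X + \lambda I)^{-1} X^\top y$; a minimal sketch on hypothetical data (illustrating the general regularization effect, not the paper's experiments):

```python
import numpy as np

# Toy data where y = x1 + x2 exactly, so the unregularized fit is [1, 1].
X = np.array([[1., 2.], [2., 1.], [3., 4.], [4., 3.]])
y = np.array([3., 3., 7., 7.])

def fit(X, y, lam):
    """Least squares with an L2 penalty of strength lam (lam=0 gives OLS)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_ols   = fit(X, y, 0.0)    # unregularized solution
w_ridge = fit(X, y, 5.0)    # L2-regularized solution

# The penalty shrinks the weight norm: a "simpler" hypothesis.
print(np.linalg.norm(w_ridge), np.linalg.norm(w_ols))
```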

Smaller Models, Better Generalization

no code implementations • 29 Aug 2019 • Mayank Sharma, Suraj Tripathi, Abhimanyu Dubey, Jayadeva, Sai Guruju, Nihal Goalla

Reducing network complexity has been a major research focus in recent years with the advent of mobile technology.

Tasks: Quantization

Guided Random Forest and its application to data approximation

no code implementations • 2 Sep 2019 • Prashant Gupta, Aashi Jindal, Jayadeva, Debarka Sengupta

We present a new way of constructing an ensemble classifier, named the Guided Random Forest (GRAF) in the sequel.

Enhash: A Fast Streaming Algorithm For Concept Drift Detection

no code implementations • 7 Nov 2020 • Aashi Jindal, Prashant Gupta, Debarka Sengupta, Jayadeva

We propose Enhash, a fast ensemble learner that detects concept drift in a data stream.
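Enhash's hashing-based mechanism is not described in this snippet; as a generic sketch of window-based concept drift detection (flag drift when the recent error rate rises well above a reference window, with hypothetical parameter values):

```python
from collections import deque

def detect_drift(errors, window=50, threshold=0.2):
    """errors: iterable of 0/1 prediction mistakes, in stream order.
    Returns the index at which drift is flagged, or None."""
    ref, recent = deque(maxlen=window), deque(maxlen=window)
    for i, e in enumerate(errors):
        recent.append(e)
        if i < window:
            ref.append(e)                    # fill the reference window first
        elif sum(recent) / window - sum(ref) / window > threshold:
            return i                         # recent error rate jumped
    return None

# Simulated stream whose error rate jumps from ~5% to ~60% halfway through.
stream = [0] * 95 + [1] * 5 + [1, 0, 1, 1, 0] * 20
print(detect_drift(stream))   # flags drift shortly after the change point
```

Real detectors (DDM, ADWIN, and similar) refine this with statistical tests and adaptive window sizes.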

Complexity Controlled Generative Adversarial Networks

no code implementations • 20 Nov 2020 • Himanshu Pant, Jayadeva, Sumit Soman

One of the issues faced in training Generative Adversarial Nets (GANs) and their variants is the problem of mode collapse, wherein the training stability in terms of the generative loss increases as more training data is used.

HybMT: Hybrid Meta-Predictor based ML Algorithm for Fast Test Vector Generation

no code implementations • 22 Jul 2022 • Shruti Pandey, Jayadeva, Smruti R. Sarangi

Compared to a popular, state-of-the-art commercial ATPG tool, HybMT shows an overall reduction of 56.6% in CPU time without compromising fault coverage on the EPFL benchmark circuits.

Tasks: Decision Making

Predicting Oxide Glass Properties with Low Complexity Neural Network and Physical and Chemical Descriptors

no code implementations • 19 Oct 2022 • Suresh Bishnoi, Skyler Badge, Jayadeva, N. M. Anoop Krishnan

In addition, we combine the LCNN with physical and chemical descriptors that allow the development of universal models that can provide predictions for components beyond the training set.

Cementron: Machine Learning the Constituent Phases in Cement Clinker from Optical Images

no code implementations • 6 Nov 2022 • Mohd Zaki, Siddhant Sharma, Sunil Kumar Gurjar, Raju Goyal, Jayadeva, N. M. Anoop Krishnan

Specifically, we finetune the image detection and segmentation model Detectron-2 on the cement microstructure to develop a model for detecting the cement phases, namely, Cementron.

Graph Neural Stochastic Differential Equations for Learning Brownian Dynamics

no code implementations • 20 Jun 2023 • Suresh Bishnoi, Jayadeva, Sayan Ranu, N. M. Anoop Krishnan

Here, we propose a framework, namely Brownian graph neural networks (BROGNET), combining stochastic differential equations (SDEs) and GNNs to learn Brownian dynamics directly from the trajectory.

Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks

no code implementations • 11 Jul 2023 • Suresh Bishnoi, Ravinder Bhattoo, Jayadeva, Sayan Ranu, N M Anoop Krishnan

Here, we present a Hamiltonian graph neural network (HGNN), a physics-enforced GNN that learns the dynamics of systems directly from their trajectory.

Tasks: Symbolic Regression
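For background on what a "physics-enforced" Hamiltonian model constrains: in canonical coordinates $(q, p)$ with Hamiltonian $H(q, p)$, the learned dynamics must follow Hamilton's equations of motion,

```latex
\[
  \dot{q} = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial q},
\]
```

which guarantees that trajectories predicted by the network conserve the learned energy $H$.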

MaScQA: A Question Answering Dataset for Investigating Materials Science Knowledge of Large Language Models

no code implementations • 17 Aug 2023 • Mohd Zaki, Jayadeva, Mausam, N. M. Anoop Krishnan

Further, we evaluate the performance of the GPT-3.5 and GPT-4 models on solving these questions via zero-shot and chain-of-thought prompting.

Tasks: Question Answering

No prejudice! Fair Federated Graph Neural Networks for Personalized Recommendation

no code implementations • 10 Dec 2023 • Nimesh Agrawal, Anuj Kumar Sirohi, Jayadeva, Sandeep Kumar

However, integrating these graph models into the Federated Learning (FL) paradigm with fairness constraints poses formidable challenges, as this requires access to the entire interaction graph and sensitive user information (such as gender, age, etc.).

Tasks: Fairness, Federated Learning +1
