Search Results for author: Snehanshu Saha

Found 27 papers, 10 papers with code

Can Tree Based Approaches Surpass Deep Learning in Anomaly Detection? A Benchmarking Study

1 code implementation11 Feb 2024 Santonu Sarkar, Shanay Mehta, Nicole Fernandes, Jyotirmoy Sarkar, Snehanshu Saha

The paper contributes an unbiased comparison of various anomaly detection algorithms, spanning classical machine learning (including several tree-based approaches), deep learning, and outlier detection methods.

Anomaly Detection Benchmarking +2

Strong convexity-guided hyper-parameter optimization for flatter losses

no code implementations7 Feb 2024 Rahul Yedida, Snehanshu Saha

We propose a novel white-box approach to hyper-parameter optimization.

DeliverAI: Reinforcement Learning Based Distributed Path-Sharing Network for Food Deliveries

no code implementations3 Nov 2023 Ashman Mehra, Snehanshu Saha, Vaskar Raychoudhury, Archana Mathur

Existing food delivery methods are sub-optimal because each delivery is individually optimized to go directly from the producer to the consumer via the shortest time path.

Decision Making reinforcement-learning

A novel RNA pseudouridine site prediction model using Utility Kernel and data-driven parameters

no code implementations2 Nov 2023 Sourabh Patil, Archana Mathur, Raviprasad Aduri, Snehanshu Saha

The existing models to predict the pseudouridine sites in a given RNA sequence mainly depend on user-defined features such as mono and dinucleotide composition/propensities of RNA sequences.

To prune or not to prune : A chaos-causality approach to principled pruning of dense neural networks

no code implementations19 Aug 2023 Rajan Sahu, Shivam Chadha, Nithin Nagaraj, Archana Mathur, Snehanshu Saha

Reducing the size of a neural network (pruning) by removing weights without impacting its performance is an important problem for resource-constrained devices.

Network Pruning

Decoupling Quantile Representations from Loss Functions

no code implementations25 Apr 2023 Aditya Challa, Snehanshu Saha, Soma Dhavala

The simultaneous quantile regression (SQR) technique has been used to estimate uncertainties for deep learning models, but its application is limited by the requirement that the solution at the median quantile (τ = 0.5) must minimize the mean absolute error (MAE).

regression
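The median-quantile constraint above can be seen directly from the pinball (quantile) loss used in SQR: at τ = 0.5 it reduces to half the mean absolute error. A minimal sketch with illustrative values:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss used in simultaneous quantile regression."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 2.0])

# At the median quantile (tau = 0.5) the pinball loss is half the MAE,
# which is why minimizing it at tau = 0.5 is equivalent to minimizing MAE.
mae = np.mean(np.abs(y_true - y_pred))
assert np.isclose(pinball_loss(y_true, y_pred, 0.5), 0.5 * mae)
```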

Correcting Model Misspecification via Generative Adversarial Networks

no code implementations7 Apr 2023 Pronoma Banerjee, Manasi V Gude, Rajvi J Sampat, Sharvari M Hedaoo, Soma Dhavala, Snehanshu Saha

The "ABC-GAN" framework introduced here is a novel generative modeling paradigm that combines Generative Adversarial Networks (GANs) with Approximate Bayesian Computation (ABC).

Quantile LSTM: A Robust LSTM for Anomaly Detection In Time Series Data

no code implementations17 Feb 2023 Snehanshu Saha, Jyotirmoy Sarkar, Soma Dhavala, Santonu Sarkar, Preyank Mota

In particular, we propose the Parametric Elliot Function (PEF) as an activation function (AF) inside the LSTM, which saturates later than sigmoid and tanh.

Anomaly Detection Time Series +1
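The "late saturation" property can be illustrated with the classic Elliot function x/(1 + |x|); the paper's exact PEF parameterisation is not reproduced here, so the scale `k` below is an assumption:

```python
import numpy as np

def elliot(x, k=1.0):
    # Elliot-style activation; the parametric scale k is an illustrative
    # assumption -- the paper's PEF may parameterise it differently.
    return x / (1.0 + np.abs(k * x))

x = 5.0
# The Elliot function approaches its asymptote more slowly than tanh,
# i.e. it saturates late: tanh(5) is already ~0.9999, elliot(5) is ~0.83.
assert np.tanh(x) > 0.99
assert elliot(x) < 0.9
```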

Hamiltonian Monte Carlo Particle Swarm Optimizer

no code implementations8 May 2022 Omatharv Bharat Vaidya, Rithvik Terence DSouza, Snehanshu Saha, Soma Dhavala, Swagatam Das

We introduce the Hamiltonian Monte Carlo Particle Swarm Optimizer (HMC-PSO), an optimization algorithm that reaps the benefits of both Exponentially Averaged Momentum PSO and HMC sampling.

Position
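As a rough sketch of the exponentially averaged momentum idea that HMC-PSO builds on (the paper's exact update rules and its HMC sampling component are not reproduced here), a PSO loop with an EM-smoothed velocity might look like:

```python
import numpy as np

# Minimal PSO with an exponentially averaged momentum (EM) term; the
# coefficients and the precise form of the EM update are illustrative
# assumptions, not the HMC-PSO algorithm itself.
rng = np.random.default_rng(2)

def f(x):  # objective: minimise the sphere function
    return np.sum(x**2, axis=1)

n, dim = 20, 2
pos = rng.uniform(-5, 5, size=(n, dim))
vel = np.zeros((n, dim))
mom = np.zeros((n, dim))            # exponentially averaged momentum
pbest = pos.copy()
gbest = pos[np.argmin(f(pos))].copy()
init_best = f(pos).min()

w, c1, c2, beta = 0.7, 1.5, 1.5, 0.9
for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    mom = beta * mom + (1 - beta) * vel   # EM smoothing of the velocity
    pos = pos + mom
    better = f(pos) < f(pbest)
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(f(pbest))].copy()

# By construction the swarm's best value never gets worse.
assert f(gbest[None])[0] <= init_best
```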

Postulating Exoplanetary Habitability via a Novel Anomaly Detection Method

no code implementations6 Sep 2021 Jyotirmoy Sarkar, Kartik Bhatia, Snehanshu Saha, Margarita Safonova, Santonu Sarkar

The algorithm is based on the postulate that Earth is an anomaly, with the possibility that a few other anomalies exist among thousands of data points.

Anomaly Detection Clustering

Fairly Constricted Multi-Objective Particle Swarm Optimization

1 code implementation10 Apr 2021 Anwesh Bhattacharya, Snehanshu Saha, Nithin Nagaraj

It has been well documented that the use of exponentially-averaged momentum (EM) in particle swarm optimization (PSO) is advantageous over the vanilla PSO algorithm.

Fairness

A Swarm Variant for the Schrödinger Solver

no code implementations10 Apr 2021 Urvil Nileshbhai Jivani, Omatharv Bharat Vaidya, Anwesh Bhattacharya, Snehanshu Saha

This paper introduces the application of Exponentially Averaged Momentum Particle Swarm Optimization (EM-PSO) as a derivative-free optimizer for neural networks.

Mathematical Proofs

Estimation and Applications of Quantiles in Deep Binary Classification

1 code implementation9 Feb 2021 Anuj Tambwekar, Anirudh Maiya, Soma Dhavala, Snehanshu Saha

We quantify the uncertainty of the class probabilities in terms of prediction intervals, and develop individualized confidence scores that can be used to decide whether a prediction is reliable or not at scoring time.

Binary Classification Classification +4
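An illustrative (not the paper's exact) way to turn quantile estimates of a class probability into prediction intervals and a reliability flag at scoring time:

```python
import numpy as np

# Hypothetical quantile estimates of a positive-class probability for
# three test points; the values and the 0.2 threshold are illustrative
# assumptions, not the paper's scoring rule.
q_low = np.array([0.55, 0.30, 0.80])   # lower-quantile estimates
q_high = np.array([0.95, 0.75, 0.85])  # upper-quantile estimates

# A narrow prediction interval means the model is consistent about the
# class probability, so the prediction can be flagged as reliable.
interval_width = q_high - q_low
reliable = interval_width < 0.2

assert reliable.tolist() == [False, False, True]
```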

Automated Detection of Double Nuclei Galaxies using GOTHIC and the Discovery of a Large Sample of Dual AGN

1 code implementation23 Nov 2020 Anwesh Bhattacharya, Nehal C. P., Mousumi Das, Abhishek Paswan, Snehanshu Saha, Francoise Combes

We present a novel algorithm to detect double nuclei galaxies (DNG) called GOTHIC (Graph BOosted iterated HIll Climbing), which detects whether a given image of a galaxy has two or more closely separated nuclei.

LALR: Theoretical and Experimental validation of Lipschitz Adaptive Learning Rate in Regression and Neural Networks

no code implementations19 May 2020 Snehanshu Saha, Tejas Prashanth, Suraj Aralihalli, Sumedh Basarkod, T. S. B Sudarshan, Soma S. Dhavala

We propose a theoretical framework for an adaptive learning rate policy for the Mean Absolute Error loss function and Quantile loss function and evaluate its effectiveness for regression tasks.

regression

AdaSwarm: Augmenting Gradient-Based optimizers in Deep Learning with Swarm Intelligence

2 code implementations19 May 2020 Rohan Mohapatra, Snehanshu Saha, Carlos A. Coello Coello, Anwesh Bhattacharya, Soma S. Dhavala, Sriparna Saha

This paper introduces AdaSwarm, a novel gradient-free optimizer with performance similar to or better than the Adam optimizer adopted in neural networks.

Mathematical Proofs

Parsimonious Computing: A Minority Training Regime for Effective Prediction in Large Microarray Expression Data Sets

1 code implementation18 May 2020 Shailesh Sridhar, Snehanshu Saha, Azhar Shaikh, Rahul Yedida, Sriparna Saha

We leverage the Lipschitz continuity of the Mean Square Error loss to compute the learning rate in shallow neural networks.

ChaosNet: A Chaos based Artificial Neural Network Architecture for Classification

2 code implementations6 Oct 2019 Harikrishnan Nellippallil Balakrishnan, Aditi Kathpalia, Snehanshu Saha, Nithin Nagaraj

Inspired by chaotic firing of neurons in the brain, we propose ChaosNet -- a novel chaos based artificial neural network architecture for classification tasks.

Classification General Classification

Evolution of Novel Activation Functions in Neural Network Training with Applications to Classification of Exoplanets

3 code implementations1 Jun 2019 Snehanshu Saha, Nithin Nagaraj, Archana Mathur, Rahul Yedida

We present an analytical exploration of novel activation functions that arise from the integration of several ideas, leading to their implementation and subsequent use in the habitability classification of exoplanets.

General Classification

LipschitzLR: Using theoretically computed adaptive learning rates for fast convergence

5 code implementations20 Feb 2019 Rahul Yedida, Snehanshu Saha, Tejas Prashanth

In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.

Handwritten Digit Recognition Object Detection
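The idea of a theoretically computed learning rate can be sketched for linear regression with MSE, where the largest Hessian eigenvalue gives a Lipschitz constant L for the gradient and η = 1/L is a safe step size (a simplification of the paper's deep-network setting):

```python
import numpy as np

# Synthetic linear-regression problem; data and noise level are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

n = X.shape[0]
# For MSE (1/n)||Xw - y||^2 the Hessian is (2/n) X^T X, so its largest
# eigenvalue is a Lipschitz constant L of the gradient; eta = 1/L is then
# a theoretically justified step size (no manual tuning needed).
L = 2.0 / n * np.linalg.eigvalsh(X.T @ X).max()
eta = 1.0 / L

w = np.zeros(3)
for _ in range(500):
    grad = 2.0 / n * X.T @ (X @ w - y)
    w -= eta * grad

assert np.allclose(w, true_w, atol=0.05)
```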

SBAF: A New Activation Function for Artificial Neural Net based Habitability Classification

no code implementations6 Jun 2018 Snehanshu Saha, Archana Mathur, Kakoli Bora, Surbhi Agrawal, Suryoday Basak

We explore the efficacy of using a novel activation function in Artificial Neural Networks (ANN) in characterizing exoplanets into different classes.

BIG-bench Machine Learning General Classification
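SBAF is commonly written as 1/(1 + k·x^α·(1−x)^(1−α)) on x ∈ (0, 1); the constants below are illustrative assumptions, and the paper's tuned values may differ:

```python
import numpy as np

def sbaf(x, k=0.91, alpha=0.5):
    # SBAF as commonly stated; k and alpha here are illustrative
    # assumptions, defined for inputs x in the open interval (0, 1).
    return 1.0 / (1.0 + k * x**alpha * (1.0 - x)**(1.0 - alpha))

x = np.linspace(0.01, 0.99, 99)
y = sbaf(x)
# The denominator exceeds 1 on (0, 1), so outputs stay strictly in (0, 1).
assert np.all((y > 0) & (y < 1))
```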

TeamDL at SemEval-2018 Task 8: Cybersecurity Text Analysis using Convolutional Neural Network and Conditional Random Fields

no code implementations SEMEVAL 2018 Manikandan R, Krishna Madgula, Snehanshu Saha

For subtask 1, we experimented with two categories of word embeddings, namely native embeddings and task-specific embeddings, using the Word2vec and GloVe algorithms.

General Classification LEMMA +5

Machine Learning in Astronomy: A Case Study in Quasar-Star Classification

no code implementations13 Apr 2018 Mohammed Viquar, Suryoday Basak, Ariruna Dasgupta, Surbhi Agrawal, Snehanshu Saha

We present the results of various automated classification methods, based on machine learning (ML), of objects from data releases 6 and 7 (DR6 and DR7) of the Sloan Digital Sky Survey (SDSS), primarily distinguishing stars from quasars.

Astronomy BIG-bench Machine Learning +1

Predicting the direction of stock market prices using random forest

3 code implementations29 Apr 2016 Luckyson Khaidem, Snehanshu Saha, Sudeepa Roy Dey

In this paper, we propose a novel way to minimize the risk of investment in the stock market by predicting the returns of a stock using a class of powerful machine learning algorithms known as ensemble learning.

BIG-bench Machine Learning Ensemble Learning
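A minimal sketch of the direction-prediction setup using scikit-learn's RandomForestClassifier on lagged returns; the paper uses technical indicators as features, so the synthetic data and lag features here are only illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic daily returns; real inputs would be derived from price data.
rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, size=500)

# Features: the previous `lags` returns; label: 1 if the next return is up.
lags = 3
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:400], y[:400])          # train on the first 400 days
accuracy = clf.score(X[400:], y[400:])  # evaluate on the held-out tail
```

On pure noise like this the accuracy hovers near chance; the point of the sketch is the train/predict pipeline, not the result.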

ASTROMLSKIT: A New Statistical Machine Learning Toolkit: A Platform for Data Analytics in Astronomy

no code implementations29 Apr 2015 Snehanshu Saha, Surbhi Agrawal, Manikandan R, Kakoli Bora, Swati Routh, Anand Narasimhamurthy

Astroinformatics is a new impact area in the world of astronomy, occasionally called the final frontier, where several astrophysicists, statisticians and computer scientists work together to tackle various data intensive astronomical problems.

Astronomy BIG-bench Machine Learning +1
