1 code implementation • 11 Feb 2024 • Santonu Sarkar, Shanay Mehta, Nicole Fernandes, Jyotirmoy Sarkar, Snehanshu Saha
The paper contributes a significant, unbiased comparison of anomaly detection algorithms, spanning classical machine learning (including various tree-based approaches), deep learning, and outlier detection methods.
no code implementations • 7 Feb 2024 • Rahul Yedida, Snehanshu Saha
We propose a novel white-box approach to hyper-parameter optimization.
no code implementations • 3 Nov 2023 • Ashman Mehra, Snehanshu Saha, Vaskar Raychoudhury, Archana Mathur
Existing food delivery methods are sub-optimal because each delivery is individually optimized to go directly from the producer to the consumer via the shortest time path.
no code implementations • 2 Nov 2023 • Sourabh Patil, Archana Mathur, Raviprasad Aduri, Snehanshu Saha
The existing models to predict the pseudouridine sites in a given RNA sequence mainly depend on user-defined features such as mono and dinucleotide composition/propensities of RNA sequences.
no code implementations • 19 Aug 2023 • Rajan Sahu, Shivam Chadha, Nithin Nagaraj, Archana Mathur, Snehanshu Saha
Reducing the size of a neural network (pruning) by removing weights without impacting its performance is an important problem for resource-constrained devices.
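For context on the entry above, a generic magnitude-pruning baseline can be sketched as follows; this is an illustrative reference point, not the paper's own pruning criterion:

```python
def magnitude_prune(weights, fraction=0.5):
    # Zero out the smallest-magnitude fraction of weights
    # (a common baseline; the paper's method differs).
    k = int(len(weights) * fraction)
    if k == 0:
        return weights[:]
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= cutoff and removed < k:
            pruned.append(0.0)   # weight removed
            removed += 1
        else:
            pruned.append(w)     # weight kept
    return pruned

w = [0.05, -1.2, 0.3, -0.01, 0.8, 0.02]
pruned = magnitude_prune(w, 0.5)
```

Here half the weights (the three smallest in magnitude) are zeroed while the large weights survive.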
no code implementations • 25 Apr 2023 • Aditya Challa, Snehanshu Saha, Soma Dhavala
The simultaneous quantile regression (SQR) technique has been used to estimate uncertainties for deep learning models, but its application is limited by the requirement that the solution at the median quantile (τ = 0.5) must minimize the mean absolute error (MAE).
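For background, the check (pinball) loss at the median quantile reduces to half the absolute error, which is why the SQR median solution is tied to the MAE; a minimal sketch:

```python
def check_loss(e, tau):
    # check (pinball) loss for residual e = y - y_hat at quantile tau
    return max(tau * e, (tau - 1.0) * e)

# At tau = 0.5 the check loss is exactly half the absolute error,
# so minimizing it at the median quantile minimizes MAE.
for e in (-2.0, -0.5, 1.0, 3.0):
    assert check_loss(e, 0.5) == 0.5 * abs(e)
```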
no code implementations • 7 Apr 2023 • Pronoma Banerjee, Manasi V Gude, Rajvi J Sampat, Sharvari M Hedaoo, Soma Dhavala, Snehanshu Saha
The "ABC-GAN" framework introduced here is a novel generative modeling paradigm that combines Generative Adversarial Networks (GANs) and Approximate Bayesian Computation (ABC).
no code implementations • 17 Feb 2023 • Snehanshu Saha, Jyotirmoy Sarkar, Soma Dhavala, Santonu Sarkar, Preyank Mota
In particular, we propose the Parametric Elliot Function (PEF) as an activation function (AF) inside the LSTM, which saturates later than sigmoid and tanh.
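As an illustration of the saturation claim, the base (non-parametric) Elliot function approaches its asymptotes only polynomially, whereas tanh does so exponentially; the parametric form used in the paper is not reproduced here:

```python
import math

def elliot(x):
    # Elliot function: approaches +/-1 only polynomially in |x|,
    # so it saturates later than the exponentially saturating tanh
    return x / (1.0 + abs(x))

# At x = 3, tanh is already within 0.005 of its asymptote,
# while the Elliot function is still 0.25 away.
tanh_gap = 1.0 - math.tanh(3.0)
elliot_gap = 1.0 - elliot(3.0)
```

The larger gap for the Elliot function at moderate inputs means gradients vanish more slowly during training.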
no code implementations • 9 Sep 2022 • Shashank Sanjay Bhat, Thiagaraj Prabu, Ben Stappers, Atul Ghalame, Snehanshu Saha, T. S. B Sudarshan, Zafiirah Hosenie
We have successfully demonstrated this algorithm by detecting candidate signatures on a simulation dataset.
no code implementations • 19 Jun 2022 • Aryaman Jeendgar, Tanmay Devale, Soma S Dhavala, Snehanshu Saha
We propose log-cosh as a smooth alternative to the check loss.
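A brief sketch of why log-cosh is a smooth stand-in: it is differentiable everywhere, behaving like a squared error near zero and tracking the absolute error up to a constant for large residuals, whereas the check loss has a kink at zero:

```python
import math

def check_loss(e, tau=0.5):
    # check (pinball) loss: piecewise linear, non-differentiable at e = 0
    return max(tau * e, (tau - 1.0) * e)

def log_cosh(e):
    # smooth everywhere: ~ e^2/2 near 0 and ~ |e| - log(2) for large |e|
    return math.log(math.cosh(e))

# For a large residual, log-cosh matches |e| - log(2) almost exactly.
gap = log_cosh(10.0) - (10.0 - math.log(2.0))
```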
no code implementations • 8 May 2022 • Omatharv Bharat Vaidya, Rithvik Terence DSouza, Snehanshu Saha, Soma Dhavala, Swagatam Das
We introduce the Hamiltonian Monte Carlo Particle Swarm Optimizer (HMC-PSO), an optimization algorithm that reaps the benefits of both Exponentially Averaged Momentum PSO and HMC sampling.
no code implementations • 6 Sep 2021 • Jyotirmoy Sarkar, Kartik Bhatia, Snehanshu Saha, Margarita Safonova, Santonu Sarkar
The algorithm is based on the postulate that Earth is an anomaly, with the possibility of the existence of a few other anomalies among thousands of data points.
1 code implementation • 10 Apr 2021 • Anwesh Bhattacharya, Snehanshu Saha, Nithin Nagaraj
It has been well documented that the use of exponentially-averaged momentum (EM) in particle swarm optimization (PSO) is advantageous over the vanilla PSO algorithm.
no code implementations • 10 Apr 2021 • Urvil Nileshbhai Jivani, Omatharv Bharat Vaidya, Anwesh Bhattacharya, Snehanshu Saha
This paper introduces the application of Exponentially Averaged Momentum Particle Swarm Optimization (EM-PSO) as a derivative-free optimizer for neural networks.
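A minimal, hypothetical sketch of a PSO loop with an exponentially averaged momentum term (the exact EM-PSO update rule in the papers above may differ; all parameter values here are illustrative assumptions):

```python
import random

def empso_minimize(f, lo, hi, n=10, steps=300, w=0.7, c1=1.5, c2=1.5,
                   beta=0.9, seed=0):
    # Sketch: an exponentially averaged momentum m smooths the raw PSO
    # velocity before the position update.
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                 # raw velocities
    ms = [0.0] * n                                 # averaged momenta
    pbest = xs[:]                                  # personal bests
    gbest = min(pbest, key=f)                      # global best
    for _ in range(steps):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            ms[i] = beta * ms[i] + (1 - beta) * vs[i]  # EM term
            xs[i] += ms[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = empso_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

Being derivative-free, the loop only ever evaluates `f`, which is what makes it usable where gradients are unavailable.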
1 code implementation • 9 Feb 2021 • Anuj Tambwekar, Anirudh Maiya, Soma Dhavala, Snehanshu Saha
We quantify the uncertainty of the class probabilities in terms of prediction intervals, and develop individualized confidence scores that can be used to decide whether a prediction is reliable or not at scoring time.
1 code implementation • 23 Nov 2020 • Anwesh Bhattacharya, Nehal C. P., Mousumi Das, Abhishek Paswan, Snehanshu Saha, Francoise Combes
We present a novel algorithm to detect double nuclei galaxies (DNG) called GOTHIC (Graph BOosted iterated HIll Climbing) - that detects whether a given image of a galaxy has two or more closely separated nuclei.
2 code implementations • 19 May 2020 • Rohan Mohapatra, Snehanshu Saha, Carlos A. Coello Coello, Anwesh Bhattacharya, Soma S. Dhavala, Sriparna Saha
This paper introduces AdaSwarm, a novel gradient-free optimizer which has similar or even better performance than the Adam optimizer adopted in neural networks.
no code implementations • 19 May 2020 • Snehanshu Saha, Tejas Prashanth, Suraj Aralihalli, Sumedh Basarkod, T. S. B Sudarshan, Soma S. Dhavala
We propose a theoretical framework for an adaptive learning rate policy for the Mean Absolute Error loss function and Quantile loss function and evaluate its effectiveness for regression tasks.
1 code implementation • 18 May 2020 • Shailesh Sridhar, Snehanshu Saha, Azhar Shaikh, Rahul Yedida, Sriparna Saha
We leveraged the Lipschitz continuity of the Mean Square Error loss to compute the learning rate in shallow neural networks.
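As a sketch of the idea, for least squares the gradient is Lipschitz with constant L = λ_max(XᵀX)/m, and the step size η = 1/L guarantees convergence of gradient descent; the tiny dataset below is an assumed toy example:

```python
def grad(w, X, y):
    # gradient of (1/2m)||Xw - y||^2, i.e. (1/m) X^T (Xw - y)
    m = len(X)
    r = [sum(X[i][j] * w[j] for j in range(len(w))) - y[i] for i in range(m)]
    return [sum(X[i][j] * r[i] for i in range(m)) / m for j in range(len(w))]

X = [[1.0, 0.0], [0.0, 2.0]]   # X^T X = diag(1, 4), m = 2
y = [3.0, 4.0]                 # exact solution w = (3, 2)
L = 4.0 / 2.0                  # lambda_max(X^T X) / m
eta = 1.0 / L                  # Lipschitz-derived learning rate
w = [0.0, 0.0]
for _ in range(200):
    g = grad(w, X, y)
    w = [wj - eta * gj for wj, gj in zip(w, g)]
```

With η = 1/L no manual learning-rate tuning is needed, which is the appeal of the Lipschitz-based policy.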
2 code implementations • 6 Oct 2019 • Harikrishnan Nellippallil Balakrishnan, Aditi Kathpalia, Snehanshu Saha, Nithin Nagaraj
Inspired by chaotic firing of neurons in the brain, we propose ChaosNet -- a novel chaos based artificial neural network architecture for classification tasks.
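A minimal sketch of a chaotic GLS (skew-tent) neuron of the kind ChaosNet builds on: the neuron iterates until its trajectory enters a neighborhood of the stimulus, and the iteration count ("firing time") serves as a feature. The parameter values below are illustrative assumptions, not the paper's settings:

```python
def gls_map(x, b=0.499):
    # skew-tent (GLS) map on [0, 1]; b is an assumed skew parameter
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, x0=0.1, eps=0.01, max_iter=10000):
    # iterate the chaotic neuron from x0 until the trajectory enters an
    # eps-neighborhood of the stimulus; the iteration count is the feature
    x, n = x0, 0
    while abs(x - stimulus) >= eps and n < max_iter:
        x = gls_map(x)
        n += 1
    return n

t = firing_time(0.7)
```

Because the chaotic orbit is dense in [0, 1], different stimuli yield different firing times, giving a simple stimulus-to-feature encoding.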
3 code implementations • 1 Jun 2019 • Snehanshu Saha, Nithin Nagaraj, Archana Mathur, Rahul Yedida
We present an analytical exploration of novel activation functions, integrating several ideas that lead to their implementation and subsequent use in the habitability classification of exoplanets.
5 code implementations • 20 Feb 2019 • Rahul Yedida, Snehanshu Saha, Tejas Prashanth
In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.
no code implementations • 6 Jun 2018 • Snehanshu Saha, Archana Mathur, Kakoli Bora, Surbhi Agrawal, Suryoday Basak
We explore the efficacy of using a novel activation function in Artificial Neural Networks (ANN) in characterizing exoplanets into different classes.
no code implementations • SEMEVAL 2018 • Manikandan R, Krishna Madgula, Snehanshu Saha
For subtask 1, we experimented with two categories of word embeddings, namely native embeddings and task-specific embeddings, using the Word2vec and GloVe algorithms.
no code implementations • 13 Apr 2018 • Mohammed Viquar, Suryoday Basak, Ariruna Dasgupta, Surbhi Agrawal, Snehanshu Saha
We present the results of various automated classification methods, based on machine learning (ML), of objects from data releases 6 and 7 (DR6 and DR7) of the Sloan Digital Sky Survey (SDSS), primarily distinguishing stars from quasars.
3 code implementations • 29 Apr 2016 • Luckyson Khaidem, Snehanshu Saha, Sudeepa Roy Dey
In this paper, we propose a novel way to minimize the risk of investment in the stock market by predicting the returns of a stock using a class of powerful machine learning algorithms known as ensemble learning.
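At its simplest, an ensemble prediction of return direction is a majority vote over base learners; a hypothetical sketch (the labels and voters below are illustrative, not the paper's models):

```python
from collections import Counter

def majority_vote(votes):
    # ensemble prediction: the most common label among base learners
    return Counter(votes).most_common(1)[0][0]

# three hypothetical weak learners predicting return direction,
# +1 = price up, -1 = price down
direction = majority_vote([+1, -1, +1])
```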
no code implementations • 29 Apr 2015 • Snehanshu Saha, Surbhi Agrawal, Manikandan. R, Kakoli Bora, Swati Routh, Anand Narasimhamurthy
Astroinformatics is a new impact area in the world of astronomy, occasionally called the final frontier, where astrophysicists, statisticians, and computer scientists work together to tackle various data-intensive astronomical problems.