Search Results for author: Anwesh Bhattacharya

Found 6 papers, 3 papers with code

Efficient ML Models for Practical Secure Inference

no code implementations • 26 Aug 2022 • Vinod Ganesan, Anwesh Bhattacharya, Pratyush Kumar, Divya Gupta, Rahul Sharma, Nishanth Chandran

For instance, the model provider could be a diagnostics company that has trained a state-of-the-art DenseNet-121 model for interpreting a chest X-ray and the user could be a patient at a hospital.

Encoding Involutory Invariances in Neural Networks

no code implementations • 7 Jun 2021 • Anwesh Bhattacharya, Marios Mattheakis, Pavlos Protopapas

In certain situations, neural networks are trained on data that obey underlying symmetries.
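
The symmetries in question are involutions, i.e. transformations S with S(S(x)) = x, such as parity x -> -x. As a hedged illustration (not necessarily the construction used in the paper), one generic way to hard-encode such an invariance is to symmetrize the network over the transformation:

```python
import torch
import torch.nn as nn

class InvolutorySymmetricNet(nn.Module):
    """Wraps a base network f so its output is exactly invariant under an
    involution S (S(S(x)) == x) by averaging f(x) and f(S(x)).
    This symmetrization trick is one generic way to hard-encode the
    invariance; it is illustrative, not necessarily the paper's method."""

    def __init__(self, base: nn.Module, involution):
        super().__init__()
        self.base = base
        self.S = involution  # must satisfy S(S(x)) == x

    def forward(self, x):
        # g(x) = (f(x) + f(S(x))) / 2  =>  g(S(x)) == g(x) by construction
        return 0.5 * (self.base(x) + self.base(self.S(x)))

# Example: enforce parity invariance g(-x) == g(x) for a small MLP.
net = InvolutorySymmetricNet(
    base=nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1)),
    involution=lambda x: -x,  # parity is an involution: -(-x) == x
)
x = torch.randn(8, 1)
assert torch.allclose(net(x), net(-x))  # exact invariance
```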

A Swarm Variant for the Schrödinger Solver

no code implementations • 10 Apr 2021 • Urvil Nileshbhai Jivani, Omatharv Bharat Vaidya, Anwesh Bhattacharya, Snehanshu Saha

This paper introduces the application of Exponentially Averaged Momentum Particle Swarm Optimization (EM-PSO) as a derivative-free optimizer for neural networks.

Task: Mathematical Proofs
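
As a rough illustration of EM-PSO as a derivative-free optimizer, the sketch below augments the standard PSO velocity update with an exponentially averaged momentum term. The update rule and hyperparameter values are a plausible reconstruction, not the paper's exact formulation:

```python
import numpy as np

def em_pso(loss, dim, n_particles=30, iters=200,
           w=0.7, c1=1.5, c2=1.5, beta=0.9, seed=0):
    """Minimal sketch of PSO with exponentially averaged momentum (EM-PSO).
    The EM term m accumulates past velocities with decay beta; constants
    here are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # velocities
    m = np.zeros_like(x)                          # exponentially averaged momentum
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        m = beta * m + (1 - beta) * v             # EM trace of past velocities
        v = w * v + m + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimize a quadratic without any gradients.
best_x, best_f = em_pso(lambda p: float(np.sum(p ** 2)), dim=5)
```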

Fairly Constricted Multi-Objective Particle Swarm Optimization

1 code implementation • 10 Apr 2021 • Anwesh Bhattacharya, Snehanshu Saha, Nithin Nagaraj

It has been well documented that the use of exponentially-averaged momentum (EM) in particle swarm optimization (PSO) is advantageous over the vanilla PSO algorithm.

Task: Fairness
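
For context, the comparison can be made concrete via the two velocity updates. The following is a hedged reconstruction of the commonly cited EM-PSO scheme; the paper's exact coefficients may differ:

```latex
% Vanilla PSO velocity update for particle i (position x_i, personal best p_i,
% global best g, inertia w, acceleration coefficients c_1, c_2, random r_1, r_2):
%   v_i^{t+1} = w v_i^t + c_1 r_1 (p_i - x_i^t) + c_2 r_2 (g - x_i^t)
% EM-PSO additionally keeps an exponentially averaged trace M_i of past
% velocities (decay beta) and adds it to the update:
\begin{align}
  M_i^{t+1} &= \beta\, M_i^{t} + (1-\beta)\, v_i^{t} \\
  v_i^{t+1} &= w\, v_i^{t} + M_i^{t+1}
             + c_1 r_1\,\bigl(p_i - x_i^{t}\bigr)
             + c_2 r_2\,\bigl(g - x_i^{t}\bigr) \\
  x_i^{t+1} &= x_i^{t} + v_i^{t+1}
\end{align}
```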

Automated Detection of Double Nuclei Galaxies using GOTHIC and the Discovery of a Large Sample of Dual AGN

1 code implementation • 23 Nov 2020 • Anwesh Bhattacharya, Nehal C. P., Mousumi Das, Abhishek Paswan, Snehanshu Saha, Francoise Combes

We present GOTHIC (Graph BOosted iterated HIll Climbing), a novel algorithm for detecting double nuclei galaxies (DNGs), which determines whether a given galaxy image contains two or more closely separated nuclei.
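
The full GOTHIC pipeline (including its graph-boosting stage) is not reproduced here; the sketch below illustrates only the iterated hill climbing ingredient named in the acronym, treating candidate nuclei as local intensity maxima of the image. All function names are illustrative assumptions:

```python
import numpy as np

def hill_climb(img, start, max_steps=1000):
    """From a starting pixel, repeatedly step to the brightest pixel in the
    3x3 neighbourhood until no neighbour is brighter (a local maximum)."""
    r, c = start
    for _ in range(max_steps):
        r0, c0 = max(r - 1, 0), max(c - 1, 0)
        patch = img[r0:r + 2, c0:c + 2]
        dr, dc = np.unravel_index(patch.argmax(), patch.shape)
        nr, nc = r0 + dr, c0 + dc
        if (nr, nc) == (r, c):
            return r, c                 # converged on an intensity peak
        r, c = nr, nc
    return r, c

def candidate_nuclei(img, n_starts=100, seed=0):
    """Iterate hill climbs from random seed pixels; the distinct peaks found
    are candidate nuclei. Two or more well-separated peaks would suggest a
    double nucleus."""
    rng = np.random.default_rng(seed)
    starts = zip(rng.integers(0, img.shape[0], n_starts),
                 rng.integers(0, img.shape[1], n_starts))
    return {hill_climb(img, s) for s in starts}
```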

AdaSwarm: Augmenting Gradient-Based optimizers in Deep Learning with Swarm Intelligence

2 code implementations • 19 May 2020 • Rohan Mohapatra, Snehanshu Saha, Carlos A. Coello Coello, Anwesh Bhattacharya, Soma S. Dhavala, Sriparna Saha

This paper introduces AdaSwarm, a novel gradient-free optimizer that performs on par with or better than the Adam optimizer widely adopted in neural networks.

Task: Mathematical Proofs
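
As a hedged sketch of the general idea of augmenting a gradient-based optimizer with swarm intelligence, the code below feeds a swarm-derived pseudo-gradient into a standard Adam update. The pseudo-gradient is an assumption for illustration; AdaSwarm's actual gradient-approximation theorem may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def swarm_pseudo_grad(x, pbest, gbest, c1=1.5, c2=1.5):
    """Hypothetical gradient surrogate built from the PSO attraction terms:
    the pull toward the personal and global bests serves as a descent
    direction. Mirrors the spirit of a swarm-based gradient approximation,
    not the paper's exact estimator."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return -(c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))

def adam_step(x, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update; 'grad' may come from any estimator."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Usage sketch: drive x toward the (here, known) best point without backprop.
x, m, v = np.ones(4), np.zeros(4), np.zeros(4)
pbest = gbest = np.zeros(4)  # stand-ins for swarm-maintained best positions
for t in range(1, 201):
    x, m, v = adam_step(x, swarm_pseudo_grad(x, pbest, gbest), m, v, t)
```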
