Search Results for author: Koushik Biswas

Found 15 papers, 5 papers with code

Adaptive Smooth Activation for Improved Disease Diagnosis and Organ Segmentation from Radiology Scans

no code implementations • 29 Nov 2023 • Koushik Biswas, Debesh Jha, Nikhil Kumar Tomar, Gorkem Durak, Alpay Medetalibeyoglu, Matthew Antalek, Yury Velichko, Daniela Ladner, Amir Borhani, Ulas Bagci

We apply this new activation function to two important and commonly used general tasks in medical image analysis: automatic disease diagnosis and organ segmentation in CT and MRI.

Image Classification · Organ Segmentation · +2

A Non-monotonic Smooth Activation Function

no code implementations • 16 Oct 2023 • Koushik Biswas, Meghana Karri, Ulaş Bağcı

Activation functions are crucial in deep learning models since they introduce non-linearity into the networks, allowing them to learn from errors and make adjustments, which is essential for learning complex patterns.

Adversarial Attack · Adversarial Robustness · +3
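A minimal NumPy sketch of the point this abstract makes: without an activation, stacked linear layers collapse into a single linear map, so depth adds no expressive power. The layer sizes and the tanh choice are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

# Two stacked linear layers...
deep = W2 @ (W1 @ x)
# ...equal one linear layer with the product weight matrix.
shallow = (W2 @ W1) @ x
assert np.allclose(deep, shallow)

# Inserting a smooth non-linearity (here tanh) breaks the collapse.
nonlinear = W2 @ np.tanh(W1 @ x)
assert not np.allclose(nonlinear, shallow)
```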

Forecasting formation of a Tropical Cyclone Using Reanalysis Data

1 code implementation • 10 Dec 2022 • Sandeep Kumar, Koushik Biswas, Ashish Kumar Pandey

In this study, a deep learning model is proposed that can forecast the formation of a tropical cyclone up to 60 hours in advance with high accuracy.

SMU: smooth activation function for deep networks using smoothing maximum technique

3 code implementations • 8 Nov 2021 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey

A good choice of activation function can have significant consequences in improving network performance.
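The title's "smoothing maximum technique" can be illustrated with a short sketch: write max(a, b) = ((a + b) + |a − b|)/2 and replace the non-smooth |z| with z·erf(μz), which recovers the exact maximum as μ → ∞. Applied to max(x, αx), this yields a smooth Leaky ReLU. The parameter values below are illustrative, not the paper's tuned defaults; SciPy is assumed for erf.

```python
import numpy as np
from scipy.special import erf

def smooth_max(a, b, mu):
    # |a - b| approximated by (a - b) * erf(mu * (a - b)).
    d = a - b
    return 0.5 * ((a + b) + d * erf(mu * d))

def smu(x, alpha=0.25, mu=2.5):
    # Smooth counterpart of Leaky ReLU, i.e. smooth_max(x, alpha * x).
    return smooth_max(x, alpha * x, mu)

x = np.linspace(-3.0, 3.0, 7)
print(smu(x))  # smooth near 0, ~Leaky ReLU away from it
```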

SAU: Smooth activation function using convolution with approximate identities

no code implementations • 27 Sep 2021 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey

Well-known activation functions like ReLU or Leaky ReLU are non-differentiable at the origin.
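A numerical sketch of the recipe this entry describes: convolving a non-differentiable activation with a narrow Gaussian (an approximate identity) produces a function that is smooth at the origin yet tracks the original away from it. The kernel width and grid below are illustrative choices.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    return np.where(x >= 0, x, alpha * x)

def gaussian(x, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Discretize both functions and convolve on a grid.
x = np.linspace(-6.0, 6.0, 2401)
dx = x[1] - x[0]
smoothed = np.convolve(leaky_relu(x), gaussian(x, sigma=0.5), mode="same") * dx

# Away from the origin (and from grid boundary effects) the smoothed
# function is nearly indistinguishable from Leaky ReLU.
mask = (x > 2.0) & (x < 3.0)
print(np.abs(smoothed[mask] - leaky_relu(x[mask])).max())  # ~1e-5
```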

ErfAct and Pserf: Non-monotonic Smooth Trainable Activation Functions

no code implementations • 9 Sep 2021 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey

An activation function is a crucial component of a neural network that introduces non-linearity in the network.
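The common thread of this entry is that the activations are trainable. A minimal PyTorch sketch of that pattern: expose the activation's shape parameters as nn.Parameters so the optimizer tunes them alongside the network weights. The erf-based form below is an illustrative smooth, non-monotonic choice, not the paper's exact ErfAct or Pserf definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableSmoothActivation(nn.Module):
    def __init__(self, alpha: float = 0.75, beta: float = 0.75):
        super().__init__()
        # Learnable shape parameters, updated by backprop.
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.beta = nn.Parameter(torch.tensor(beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Smooth everywhere; dips below zero for negative inputs,
        # hence non-monotonic.
        return x * torch.erf(self.alpha * F.softplus(self.beta * x))

net = nn.Sequential(nn.Linear(8, 16), TrainableSmoothActivation(), nn.Linear(16, 1))
```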

Intensity Prediction of Tropical Cyclones using Long Short-Term Memory Network

no code implementations • 7 Jul 2021 • Koushik Biswas, Sandeep Kumar, Ashish Kumar Pandey

Therefore, predicting the intensity of a tropical cyclone in advance is of utmost importance.
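A hedged PyTorch sketch of an LSTM regressor along the lines this entry suggests. The feature count, window length, and single-valued intensity target are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class IntensityLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted intensity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); regress from the last hidden state.
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

model = IntensityLSTM()
window = torch.randn(8, 12, 4)  # 8 tracks, 12 time steps, 4 features
print(model(window).shape)      # torch.Size([8, 1])
```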

Tropical cyclone intensity estimations over the Indian ocean using Machine Learning

no code implementations • 7 Jul 2021 • Koushik Biswas, Sandeep Kumar, Ashish Kumar Pandey

We use multi-class classification models for the categorical outcome variable, cyclone grade, and regression models for the maximum sustained surface wind speed (MSWS), as it is a continuous variable.

BIG-bench Machine Learning · Multi-class Classification
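A scikit-learn sketch of the two-model setup the snippet describes: a multi-class classifier for the cyclone grade and a regressor for MSWS. The synthetic features and targets are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # placeholder track/pressure features
grade = rng.integers(0, 5, size=200)   # categorical outcome: cyclone grade
msws = rng.uniform(20, 120, size=200)  # continuous outcome: MSWS

clf = RandomForestClassifier(random_state=0).fit(X, grade)
reg = RandomForestRegressor(random_state=0).fit(X, msws)
print(clf.predict(X[:3]), reg.predict(X[:3]))
```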

Orthogonal-Padé Activation Functions: Trainable Activation functions for smooth and faster convergence in deep networks

no code implementations • 17 Jun 2021 • Koushik Biswas, Shilpak Banerjee, Ashish Kumar Pandey

We propose orthogonal-Padé activation functions, which are trainable activation functions, and show that they have faster learning capability and improve accuracy on standard deep learning datasets and models.
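A PyTorch sketch of the underlying idea: a Padé (rational) activation, a ratio of polynomials with learnable coefficients. The paper works with orthogonal polynomial bases (e.g. Chebyshev); the plain monomial basis below is a simplification for illustration.

```python
import torch
import torch.nn as nn

class PadeActivation(nn.Module):
    def __init__(self, m: int = 3, n: int = 2):
        super().__init__()
        self.p = nn.Parameter(0.1 * torch.randn(m + 1))  # numerator coeffs
        self.q = nn.Parameter(0.1 * torch.randn(n))      # denominator coeffs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = sum(c * x**i for i, c in enumerate(self.p))
        # abs keeps the denominator >= 1, so there are no poles.
        den = 1.0 + torch.abs(sum(c * x**(i + 1) for i, c in enumerate(self.q)))
        return num / den

act = PadeActivation()
print(act(torch.linspace(-2.0, 2.0, 5)))
```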

Prediction of Landfall Intensity, Location, and Time of a Tropical Cyclone

1 code implementation • 30 Mar 2021 • Sandeep Kumar, Koushik Biswas, Ashish Kumar Pandey

The model takes as input, as a time series, the best track data of a cyclone, consisting of its location, pressure, sea surface temperature, and intensity over a span of 12 to 36 hours at any point during the course of the cyclone, and then provides predictions with high accuracy.

Time Series · Time Series Analysis
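A sketch of how best-track records might be windowed into model inputs as the snippet describes: each contiguous run of observations becomes one training sequence. Record spacing, window length, and feature names are assumptions for illustration.

```python
import numpy as np

def make_windows(track: np.ndarray, length: int = 8):
    """track: (T, F) array, e.g. [lat, lon, pressure, sst, intensity] per record."""
    return np.stack([track[i:i + length] for i in range(len(track) - length + 1)])

track = np.random.default_rng(1).normal(size=(40, 5))  # 40 records, 5 features
windows = make_windows(track)
print(windows.shape)  # (33, 8, 5): 33 sequences of 8 records each
```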

Predicting Landfall's Location and Time of a Tropical Cyclone Using Reanalysis Data

1 code implementation • 30 Mar 2021 • Sandeep Kumar, Koushik Biswas, Ashish Kumar Pandey

Landfall of a tropical cyclone is the event when the cyclone moves over land after crossing the coast.

Weather Forecasting

EIS -- a family of activation functions combining Exponential, ISRU, and Softplus

no code implementations • 28 Sep 2020 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey

In recent years, several novel activation functions arising from these basic functions have been proposed, which have improved accuracy in some challenging datasets.
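One illustrative composition in this spirit: a Softplus factor tempered by an ISRU-style (inverse-square-root) denominator. This specific form is an assumption for demonstration, not the family as parameterized in the paper.

```python
import numpy as np

def softplus(x):
    # Overflow-safe log(1 + exp(x)).
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def eis_like(x, alpha=1.0):
    # Softplus numerator tempered by an ISRU-style denominator.
    return x * softplus(x) / np.sqrt(1.0 + alpha * x**2)

print(eis_like(np.array([-2.0, 0.0, 2.0])))
```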

TanhSoft -- a family of activation functions combining Tanh and Softplus

no code implementations • 8 Sep 2020 • Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey

Deep learning, at its core, is built from functions that compose a linear transformation with a non-linear function known as an activation function.

General Classification · Image Classification
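An illustrative member in this spirit: a tanh gate multiplied by a Softplus envelope, with one shape parameter. This particular form is an assumption for demonstration, not the paper's exact parameterized family.

```python
import numpy as np

def softplus(x):
    # Overflow-safe log(1 + exp(x)).
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def tanhsoft_like(x, alpha=0.85):
    # tanh gate times a Softplus envelope: smooth and bounded below.
    return np.tanh(alpha * x) * softplus(x)

print(tanhsoft_like(np.array([-2.0, 0.0, 2.0])))
```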
