Search Results for author: Stefan Holban

Found 2 papers, 0 papers with code

P-Swish: Activation Function with Learnable Parameters Based on Swish Activation Function in Deep Learning

no code implementations • 1 Jan 2021 • Marina Adriana Mercioni, Stefan Holban

The activation function is an important aspect of deep neural network performance that warrants continuous research; for that reason, we have expanded our work in this direction.

Image Classification · Transfer Learning
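The title indicates a Swish-based activation with learnable parameters. As a minimal sketch only, assuming the standard Swish form x · sigmoid(βx) with β made trainable; the exact P-Swish parameterization from the paper may differ:

```python
import torch
import torch.nn as nn

class LearnableSwish(nn.Module):
    """Swish-style activation x * sigmoid(beta * x) with a trainable beta.

    NOTE: a generic learnable-Swish sketch, not necessarily the exact
    P-Swish formulation; the paper's parameterization may differ.
    """
    def __init__(self, beta_init: float = 1.0):
        super().__init__()
        # Trainable scaling of the sigmoid gate, optimized with the network.
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)

# Usage: drop in wherever a fixed activation would go.
layer = nn.Sequential(nn.Linear(128, 64), LearnableSwish())
```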

TeLU: A New Activation Function for Deep Learning

no code implementations • 1 Jan 2021 • Marina Adriana Mercioni, Stefan Holban

In this paper we propose two novel activation functions, which we call TeLU and TeLU learnable.
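The abstract does not state the functional form. A minimal sketch, assuming a tanh-exponential composition x · tanh(exp(x)) for TeLU and a hypothetical trainable parameter for the learnable variant; both assumptions should be checked against the paper:

```python
import torch
import torch.nn as nn

def telu(x: torch.Tensor) -> torch.Tensor:
    # Assumed tanh-exponential form; consult the paper for the exact definition.
    return x * torch.tanh(torch.exp(x))

class TeLULearnable(nn.Module):
    """Hypothetical learnable variant with a trainable alpha inside the gate.

    The paper's 'TeLU learnable' parameterization is not given here, so this
    placement of alpha is only an illustrative assumption.
    """
    def __init__(self, alpha_init: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(torch.exp(self.alpha * x))
```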
