Deeper Learning with CoLU Activation

18 Dec 2021 · Advait Vagerwal

In neural networks, non-linearity is introduced by activation functions. The Rectified Linear Unit (ReLU) has long been a popular choice, but it has known flaws. State-of-the-art functions such as Swish and Mish are now gaining attention as better choices because they address many of the shortcomings of earlier activation functions. CoLU is an activation function similar in its properties to Swish and Mish. It is defined as f(x) = x/(1 - x·e^-(x+e^x)). It is smooth, continuously differentiable, unbounded above, bounded below, non-saturating, and non-monotonic. In experiments comparing CoLU with other activation functions, CoLU usually performs better than the alternatives on deeper neural networks. When networks were trained on MNIST with an incrementally increasing number of convolutional layers, CoLU retained the highest accuracy over the largest number of layers. On a smaller network with 8 convolutional layers, CoLU had the highest mean accuracy, closely followed by ReLU. On VGG-13 trained on Fashion-MNIST, CoLU achieved 4.20% higher accuracy than Mish and 3.31% higher accuracy than ReLU. On ResNet-9 trained on CIFAR-10, CoLU achieved 0.05% higher accuracy than Swish, 0.09% higher than Mish, and 0.29% higher than ReLU. How an activation function performs relative to others depends on factors such as the number of layers, the types of layers, the number of parameters, the learning rate, and the optimizer. Further research into these factors could yield better activation functions and a deeper understanding of their behavior.
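
As a concrete illustration of the definition above, here is a minimal PyTorch sketch of CoLU transcribed directly from the formula f(x) = x/(1 - x·e^-(x+e^x)). The class name `CoLU`, the `nn.Module` wrapper, and the small convolutional block are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class CoLU(nn.Module):
    """CoLU activation: f(x) = x / (1 - x * exp(-(x + exp(x)))).

    Like Swish and Mish, it is smooth, non-monotonic, unbounded above
    and bounded below. This is a direct transcription of the formula in
    the abstract and is not numerically hardened for extreme inputs.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x / (1.0 - x * torch.exp(-(x + torch.exp(x))))


# Illustrative usage: drop CoLU in where ReLU would normally go in a small
# convolutional stack (the layer sizes here are arbitrary, not the paper's
# VGG-13 / ResNet-9 / incremental-depth architectures).
block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    CoLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    CoLU(),
)

x = torch.randn(8, 1, 28, 28)   # a batch of MNIST-sized inputs
y = block(x)                    # -> shape (8, 64, 28, 28)
```

Because the activation is element-wise and stateless, it can replace ReLU (or Swish/Mish) without any other architectural change, which is how the comparisons described in the abstract are set up.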
