Sigmoid Linear Units, or SiLUs, are activation functions for neural networks. The SiLU activation is computed as the input multiplied by the sigmoid of the input: $$\text{SiLU}(x) = x\sigma(x),$$ where $\sigma(x) = \frac{1}{1+e^{-x}}$ is the logistic sigmoid.
See Gaussian Error Linear Units (GELUs), where the SiLU was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function, where the SiLU was later experimented with.
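As a concrete illustration of the formula above, here is a minimal NumPy sketch of the activation (the standalone `silu` helper is an illustrative name; deep-learning frameworks ship built-in versions, e.g. PyTorch's `torch.nn.SiLU` / `torch.nn.functional.silu`):

```python
import numpy as np

def silu(x: np.ndarray) -> np.ndarray:
    """SiLU activation: x * sigmoid(x) = x / (1 + exp(-x)), applied elementwise."""
    return x / (1.0 + np.exp(-x))

# SiLU is roughly linear for large positive inputs, smoothly gates negative
# inputs toward zero, and is non-monotonic (minimum of about -0.28 near x = -1.28).
x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(silu(x))  # approx. [-0.0719, -0.2689, 0.0, 0.7311, 3.9281]
```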
Source: Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
Tasks in which the SiLU has been used, by number of papers:

| Task | Papers | Share |
|---|---|---|
| Image Classification | 5 | 31.25% |
| Instance Segmentation | 2 | 12.50% |
| Object Detection | 2 | 12.50% |
| Graph Attention | 1 | 6.25% |
| Autonomous Driving | 1 | 6.25% |
| Activation Function Synthesis | 1 | 6.25% |
| Learning Theory | 1 | 6.25% |
| Semantic Segmentation | 1 | 6.25% |
| Atari Games | 1 | 6.25% |