Activation Functions

General • 75 methods

Activation functions are non-linear functions applied in neural networks, typically after an affine transformation that combines weights and input features. The rectified linear unit (ReLU) has been the most popular choice over the past decade, although the best choice is architecture-dependent and many alternatives have emerged in recent years. This section contains a continually updated list of activation functions.
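To make the "affine transformation followed by a non-linearity" pattern concrete, here is a minimal numpy sketch of a single dense layer with a ReLU activation. The function names, weights, and inputs are illustrative assumptions, not taken from any particular library:

```python
import numpy as np

def relu(z):
    # ReLU: element-wise max(0, z); zeroes out negative pre-activations
    return np.maximum(0.0, z)

def dense_layer(x, W, b, activation=relu):
    # Affine transformation (W @ x + b) followed by a non-linear activation
    return activation(W @ x + b)

# Hypothetical toy layer: 2 inputs -> 3 hidden units
x = np.array([1.0, -2.0])
W = np.array([[ 0.5, -0.3],
              [ 1.0,  0.8],
              [-0.2,  0.1]])
b = np.array([0.1, 0.0, -0.5])

h = dense_layer(x, W, b)
# Pre-activations are [1.2, -0.6, -0.9]; ReLU keeps only the positive one:
# h == [1.2, 0.0, 0.0]
```

Swapping `activation` for another function (e.g. a sigmoid or GELU) changes only the non-linearity; the affine step stays the same.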
