Activation Functions

General • 69 methods

Activation functions are (typically non-linear) functions applied in neural networks after an affine transformation that combines the layer's weights with its input features. The rectified linear unit, or ReLU, has been the most popular choice over the past decade, although the choice is architecture-dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.
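
To make the pattern concrete, here is a minimal NumPy sketch of a layer that applies ReLU after an affine transformation, with a tanh-approximated GELU shown as one common alternative. All names, shapes, and values are illustrative assumptions, not taken from this page.

```python
import numpy as np

def relu(z):
    # ReLU: zero out negative pre-activations, elementwise
    return np.maximum(0.0, z)

def gelu(z):
    # GELU via its common tanh approximation (Hendrycks & Gimpel, 2016)
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # hypothetical weights: 3 input features -> 4 units
b = np.zeros(4)               # hypothetical bias
x = rng.normal(size=3)        # hypothetical input feature vector

z = W @ x + b                 # affine transformation: weights times inputs, plus bias
h_relu = relu(z)              # non-linearity applied elementwise
h_gelu = gelu(z)              # an alternative activation on the same pre-activations
print(h_relu, h_gelu)
```

Because the activation acts elementwise on the pre-activations, swapping one function for another leaves the layer's shapes and the rest of the network unchanged, which is why alternatives like GELU can be dropped in so easily.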

[Table: Method · Year · Papers — the 69 activation-function methods in this collection, each listed with its year of introduction and the number of papers using it.]