Attention

General • 121 methods

Attention is a technique that lets a model attend to different parts of its input when producing each output, capturing long-range dependencies. In the context of NLP, traditional sequence-to-sequence models compress the entire input sequence into a single fixed-length context vector, which limits their ability to handle long inputs such as long sentences. Attention instead creates shortcuts between the context vector and every position of the source input, with learned weights determining how much each position contributes. Below you will find a continuously updating list of attention-based building blocks used in deep learning.
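
To make the idea concrete, here is a minimal sketch of scaled dot-product attention in NumPy. The function and variable names are illustrative only, not taken from any particular library:

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Attention-weighted combination of value vectors.

    queries: (n_q, d)  query vectors
    keys:    (n_k, d)  key vectors
    values:  (n_k, d_v) value vectors
    """
    d = queries.shape[-1]
    # Similarity between each query and every key, scaled by sqrt(d).
    scores = queries @ keys.T / np.sqrt(d)
    # Softmax over the keys gives one weight distribution per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted sum over all source positions,
    # i.e. a "shortcut" to the entire source input.
    return weights @ values

# Example: 3 source positions, 2 target positions, 4-dimensional vectors.
keys = values = np.random.randn(3, 4)
queries = np.random.randn(2, 4)
print(scaled_dot_product_attention(queries, keys, values).shape)  # (2, 4)
```

In a Transformer, the queries, keys, and values are linear projections of token embeddings, and this weighted sum over all source positions replaces the single fixed-length context vector of earlier sequence-to-sequence models.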

Subcategories

Method Year Papers