Search Results for author: David Ciprut

Found 2 papers, 2 papers with code

Memory-efficient Transformers via Top-k Attention

1 code implementation • EMNLP (sustainlp) 2021 • Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant

Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length.

Memory-efficient Transformers via Top-$k$ Attention

1 code implementation • 13 Jun 2021 • Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant

Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length.
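For illustration, below is a minimal sketch of the top-$k$ attention idea described by the paper title: keep only the $k$ largest dot-product scores for each query and mask out the rest before the softmax. The function name topk_attention and the parameter top_k are placeholders chosen here; this is not the authors' released implementation, which additionally processes queries in chunks to bound memory.

```python
import torch
import torch.nn.functional as F

def topk_attention(q, k, v, top_k=32):
    """Dot-product attention that keeps only the top-k scores per query.

    Illustrative sketch only; names and defaults are assumptions,
    not the authors' released code.
    """
    # q, k, v: (batch, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)   # (batch, q_len, k_len)
    kth = min(top_k, scores.size(-1))
    # Smallest score that survives for each query row.
    threshold = scores.topk(kth, dim=-1).values[..., -1:]
    # Mask everything below the per-row threshold before the softmax.
    scores = scores.masked_fill(scores < threshold, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    q = torch.randn(2, 128, 64)
    k = torch.randn(2, 128, 64)
    v = torch.randn(2, 128, 64)
    out = topk_attention(q, k, v, top_k=16)
    print(out.shape)  # torch.Size([2, 128, 64])
```

Note that this sketch still materializes the full score matrix, so it only illustrates the sparsification step; the memory savings in the paper come from combining top-$k$ selection with chunked processing of queries.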
