1 code implementation • 16 Jan 2023 • Ahmed Elnaggar, Hazem Essam, Wafaa Salah-Eldin, Walid Moustafa, Mohamed Elkerdawy, Charlotte Rochereau, Burkhard Rost
As opposed to scaling up protein language models (PLMs), we seek to improve performance via protein-specific optimization.
1 code implementation • 6 Apr 2021 • Ahmed Elnaggar, Wei Ding, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Silvia Severini, Florian Matthes, Burkhard Rost
Simultaneously, the transformer model, especially in combination with transfer learning, has proven to be a powerful technique for natural language processing tasks.
1 code implementation • 13 Jul 2020 • Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, Burkhard Rost
Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids.
Ranked #1 on Protein Secondary Structure Prediction on CASP12
Dimensionality Reduction • Protein Secondary Structure Prediction
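The distinction the entry above draws between auto-regressive models (Transformer-XL, XLNet) and auto-encoder models (BERT, Albert, Electra, T5) comes down to the training objective. A toy sketch on an amino-acid sequence (illustrative only; the function names and masking rate below are assumptions, not the authors' training code):

```python
import random

MASK = "<mask>"

def autoregressive_examples(seq):
    """Auto-regressive LMs (e.g. Transformer-XL, XLNet) predict each
    residue from the prefix to its left."""
    return [(seq[:i], seq[i]) for i in range(len(seq))]

def masked_examples(seq, mask_frac=0.15, seed=0):
    """Auto-encoder LMs (e.g. BERT-style) reconstruct residues that
    were masked out of the otherwise visible sequence."""
    rng = random.Random(seed)
    n = max(1, int(len(seq) * mask_frac))
    positions = set(rng.sample(range(len(seq)), n))
    corrupted = [MASK if i in positions else aa for i, aa in enumerate(seq)]
    targets = {i: seq[i] for i in sorted(positions)}
    return corrupted, targets

seq = "MKTAYIAKQR"  # a made-up short protein fragment
print(autoregressive_examples(seq)[:3])
print(masked_examples(seq))
```

Both objective families yield self-supervised training signal from raw sequences alone, which is what lets such models scale to corpora like UniRef and BFD without labels.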
no code implementations • 16 Oct 2018 • Ahmed Elnaggar, Christoph Gebendorfer, Ingo Glaser, Florian Matthes
A potential solution to this problem is multi-task DL to enable transfer learning.
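The multi-task idea above can be sketched as a shared encoder feeding several task-specific heads, so that gradient signal from a data-rich task shapes representations also used by a data-poor task. A minimal NumPy sketch; the dimensions and task names are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden = 32, 16
W_shared = rng.normal(size=(d_in, d_hidden))  # encoder shared by all tasks
W_task_a = rng.normal(size=(d_hidden, 4))     # head for a 4-class task (assumed)
W_task_b = rng.normal(size=(d_hidden, 2))     # head for a binary task (assumed)

def forward(x, W_task):
    # The shared features h are updated by every task's loss,
    # which is the mechanism enabling transfer between tasks.
    h = np.tanh(x @ W_shared)
    return h @ W_task

x = rng.normal(size=(1, d_in))
print(forward(x, W_task_a).shape)  # (1, 4)
print(forward(x, W_task_b).shape)  # (1, 2)
```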
no code implementations • 15 Oct 2018 • Ahmed Elnaggar, Bernhard Waltl, Ingo Glaser, Jörg Landthaler, Elena Scepankova, Florian Matthes
Deep learning methods are often difficult to apply in the legal domain because of the large amounts of labeled data they require.
no code implementations • 15 Oct 2018 • Ahmed Elnaggar, Robin Otto, Florian Matthes
However, this paper focuses in particular on a transfer learning approach, applying networks trained for NEL to legal documents.