25 May 2021 • Lee Burke, Karl Pazdernik, Daniel Fortin, Benjamin Wilson, Rustam Goychayev, John Mattingly
The BERT architecture has shown even better performance on domain-specific tasks when the model is pre-trained on domain-relevant text rather than on general corpora alone.
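As a minimal sketch of this idea, the snippet below continues pre-training a general-purpose BERT checkpoint on a domain corpus with the standard masked-language-modeling objective, using Hugging Face Transformers. The corpus path (`domain_corpus.txt`), model checkpoint, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Domain-adaptive pre-training sketch: continue BERT's masked-language-model
# training on domain-relevant text before fine-tuning on a downstream task.
from datasets import load_dataset
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical corpus: one domain-relevant document per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of input tokens, the standard BERT MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-domain-adapted",  # where checkpoints are written
    per_device_train_batch_size=16,    # assumed values, tune for your corpus
    num_train_epochs=1,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The adapted checkpoint in `bert-domain-adapted` can then be fine-tuned on the domain-specific task in place of the generic `bert-base-uncased` weights.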