1 code implementation • SMM4H (COLING) 2020 • Zulfat Miftahutdinov, Andrey Sakhovskiy, Elena Tutubalina
The BERT-based multilingual model for classifying English and Russian tweets that report adverse reactions ranked second among 16 and 7 teams, respectively, in the first two subtasks of SMM4H 2019 Task 2, obtaining a relaxed F1 of 58% on English tweets and 51% on Russian tweets.
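As a rough illustration of the approach described above (not the authors' released code), the sketch below fine-tunes a multilingual BERT checkpoint for binary "reports an adverse reaction" tweet classification; the checkpoint name, example tweets, maximum length, and the single training step are all illustrative assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: any multilingual BERT covering English and Russian.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
model.train()

tweets = ["this drug gave me a terrible headache",   # English example (made up)
          "после таблетки кружится голова"]          # Russian example (made up)
labels = torch.tensor([1, 1])                        # 1 = reports an adverse reaction

batch = tokenizer(tweets, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
outputs = model(**batch, labels=labels)              # cross-entropy loss + logits
outputs.loss.backward()                              # one illustrative backward pass
predictions = outputs.logits.argmax(dim=-1)          # 0/1 decision per tweet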
no code implementations • NAACL (SMM4H) 2021 • Andrey Sakhovskiy, Zulfat Miftahutdinov, Elena Tutubalina
This paper describes neural models developed for the Social Media Mining for Health (SMM4H) 2021 Shared Task.
1 code implementation • 21 Oct 2022 • Andrey Sakhovskiy, Elena Tutubalina
These components are state-of-the-art BERT-based models for language understanding and molecular property prediction.
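A minimal sketch of the general idea, assuming a simple late-fusion design rather than the paper's exact architecture: a text encoder and a SMILES encoder each produce a [CLS] vector, and the concatenation is classified with a linear layer. Both checkpoint names and the example inputs are assumptions.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumed encoders: a multilingual BERT for text, a ChemBERTa-style model for SMILES.
text_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
text_enc = AutoModel.from_pretrained("bert-base-multilingual-cased")
mol_tok = AutoTokenizer.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")
mol_enc = AutoModel.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")

classifier = nn.Linear(text_enc.config.hidden_size + mol_enc.config.hidden_size, 2)

tweet = ["this drug gave me a rash"]            # made-up example
smiles = ["CC(=O)Oc1ccccc1C(=O)O"]              # aspirin, illustrative only

t = text_tok(tweet, return_tensors="pt", truncation=True)
m = mol_tok(smiles, return_tensors="pt", truncation=True)
text_cls = text_enc(**t).last_hidden_state[:, 0]   # [CLS] vector of the tweet
mol_cls = mol_enc(**m).last_hidden_state[:, 0]     # [CLS] vector of the SMILES string
logits = classifier(torch.cat([text_cls, mol_cls], dim=-1))  # joint prediction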
1 code implementation • 7 Apr 2020 • Elena Tutubalina, Ilseyar Alimova, Zulfat Miftahutdinov, Andrey Sakhovskiy, Valentin Malykh, Sergey Nikolenko
For the sentence classification task, our model achieves a macro F1 score of 68.82%, gaining 7.47% over the score of a BERT model trained on Russian data.
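For reference, a small toy example (not the paper's data) of how the macro F1 metric reported above is computed: the per-class F1 scores are averaged with equal weight per class, regardless of class frequency.

from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2]   # toy gold labels
y_pred = [0, 1, 1, 1, 2, 0, 2]   # toy predictions
print(f1_score(y_true, y_pred, average="macro"))   # unweighted mean of per-class F1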