AILAB-Udine@SMM4H 22: Limits of Transformers and BERT Ensembles

7 Sep 2022  ·  Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra

This paper describes the models developed by the AILAB-Udine team for the SMM4H 22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6, and 10. The main takeaways from participating in these tasks are the overwhelmingly positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
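
To illustrate the first takeaway, the sketch below shows one simple way to combine different Transformer architectures for text classification via majority voting. It is not the paper's exact method; the checkpoint names are hypothetical placeholders, and it assumes all models were fine-tuned on the same label set.

    from collections import Counter
    from transformers import pipeline

    # Hypothetical checkpoint names (placeholders, not models released with
    # this paper); any sequence-classification models fine-tuned on the same
    # label set could be substituted here.
    MODEL_NAMES = [
        "your-org/ade-bert",      # placeholder BERT-based classifier
        "your-org/ade-roberta",   # placeholder RoBERTa-based classifier
        "your-org/ade-deberta",   # placeholder DeBERTa-based classifier
    ]

    def ensemble_predict(texts):
        """Return a majority-vote label for each text across all classifiers."""
        classifiers = [pipeline("text-classification", model=name) for name in MODEL_NAMES]
        labels = []
        for text in texts:
            votes = [clf(text)[0]["label"] for clf in classifiers]
            labels.append(Counter(votes).most_common(1)[0][0])
        return labels

    print(ensemble_predict(["I felt dizzy after taking the medication."]))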
