The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies but also increases the workload for manual analysis.
In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences.
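The ensemble idea above can be sketched in a few lines: several fine-tuned models each score the same sentences, and the ensemble prediction is the per-sentence average. The scores below are invented placeholders, not results from the study.

```python
import numpy as np

# Hypothetical sketch of ensembling readability predictions from several
# fine-tuned models (e.g. GBERT and GPT-2-Wechsel variants). Each row holds
# one model's predicted readability score for the same three sentences.
model_predictions = np.array([
    [3.2, 7.1, 5.0],   # fine-tuned model 1
    [3.0, 6.8, 5.4],   # fine-tuned model 2
    [3.4, 7.3, 4.9],   # fine-tuned model 3
])

# The ensemble prediction is the per-sentence mean over models.
ensemble_scores = model_predictions.mean(axis=0)
print(ensemble_scores)
```

Averaging is only one ensembling choice; weighted means or median aggregation are common alternatives when individual models differ in reliability.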
Our model outperforms the state-of-the-art detector and most human experts on the MODA dataset.
The availability of language representations learned by large pretrained neural network models (such as BERT and ELECTRA) has led to improvements in many downstream Natural Language Processing tasks in recent years.
When restricted to the classical stages, the optimized network showed state-of-the-art classification performance with an out-of-sample F1 score of 0.95 in male C57BL/6J mice.
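For reference, the F1 score reported above is the harmonic mean of precision and recall for a class. A minimal sketch, using made-up sleep-stage labels rather than data from the study:

```python
# Toy ground-truth and predicted stage labels (illustrative only).
y_true = ["Wake", "NREM", "NREM", "REM", "NREM", "Wake"]
y_pred = ["Wake", "NREM", "REM",  "REM", "NREM", "NREM"]

positive = "NREM"  # per-class F1; macro-F1 would average over all classes

# Count true positives, false positives, and false negatives.
tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))
```

An out-of-sample score means the metric was computed on recordings held out from training, which guards against the optimistic bias of evaluating on training data.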
Sleep scoring is a necessary and time-consuming task in sleep studies.