Search Results for author: Tim Isbister

Found 9 papers, 2 papers with code

The Nordic Pile: A 1.2TB Nordic Dataset for Language Modeling

no code implementations 30 Mar 2023 Joey Öhman, Severine Verlinden, Ariel Ekgren, Amaru Cuba Gyllensten, Tim Isbister, Evangelia Gogoulou, Fredrik Carlsson, Magnus Sahlgren

Pre-training Large Language Models (LLMs) requires massive amounts of text data, and the performance of the LLMs typically correlates with the scale and quality of the datasets.

Language Modelling

Cross-lingual Transfer of Monolingual Models

no code implementations LREC 2022 Evangelia Gogoulou, Ariel Ekgren, Tim Isbister, Magnus Sahlgren

Additionally, the results of evaluating the transferred models in source language tasks reveal that their performance in the source domain deteriorates after transfer.

Cross-Lingual Transfer · Domain Adaptation

Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead?

1 code implementation NoDaLiDa 2021 Tim Isbister, Fredrik Carlsson, Magnus Sahlgren

We demonstrate empirically that a large English language model coupled with modern machine translation outperforms native language models in most Scandinavian languages (a hedged sketch of this translate-then-classify setup follows this entry).

Language Modelling · Machine Translation +1
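The result summarized above suggests a simple translate-then-classify pipeline: translate the Scandinavian input into English, then apply an English model downstream. The sketch below is only an illustration using Hugging Face transformers; the checkpoint choices (Helsinki-NLP/opus-mt-sv-en and the library's default sentiment model) are assumptions for demonstration, not the models evaluated in the paper.

```python
# Hedged sketch of a translate-then-classify pipeline (illustrative models only).
from transformers import pipeline

# Swedish-to-English machine translation (assumed MarianMT checkpoint)
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-en")

# Off-the-shelf English text classifier (assumed default sentiment model)
classifier = pipeline("sentiment-analysis")

def classify_swedish(text: str) -> dict:
    """Translate Swedish text to English, then classify it with an English model."""
    english = translator(text)[0]["translation_text"]
    return classifier(english)[0]

print(classify_swedish("Filmen var helt fantastisk!"))
```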

Automatic Extraction of Personality from Text: Challenges and Opportunities

no code implementations 22 Oct 2019 Nazar Akrami, Johan Fernquist, Tim Isbister, Lisa Kaati, Björn Pelzer

Our results show that the models based on the small high-reliability dataset performed better (in terms of $\textrm{R}^2$) than models based on the large low-reliability dataset.

Language Modelling

Dick-Preston and Morbo at SemEval-2019 Task 4: Transfer Learning for Hyperpartisan News Detection

no code implementations SEMEVAL 2019 Tim Isbister, Fredrik Johansson

In a world of information operations, influence campaigns, and fake news, classifying whether a news article follows hyperpartisan argumentation is becoming increasingly important.

Classification · General Classification +3

Learning Representations for Detecting Abusive Language

no code implementations WS 2018 Magnus Sahlgren, Tim Isbister, Fredrik Olsson

This paper discusses whether it is possible to learn a generic representation that is useful for detecting various types of abusive language (a minimal sketch of the idea follows this entry).

Abusive Language · Language Modelling +4
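A minimal sketch of the generic-representation idea, assuming a fixed sentence encoder whose embeddings are reused across abusive-language tasks with only a thin classifier on top; the encoder (all-MiniLM-L6-v2 via sentence-transformers) and the logistic-regression classifier are illustrative stand-ins, not the representations studied in the paper.

```python
# Hedged sketch: a task-agnostic text representation plus a lightweight classifier.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed generic encoder

# Toy training data (1 = abusive, 0 = not abusive)
train_texts = ["you are a worthless idiot", "have a nice day"]
train_labels = [1, 0]

X = encoder.encode(train_texts)                  # fixed, reusable representation
clf = LogisticRegression().fit(X, train_labels)  # thin task-specific layer

print(clf.predict(encoder.encode(["what a lovely morning"])))
```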

Monitoring Targeted Hate in Online Environments

no code implementations 13 Mar 2018 Tim Isbister, Magnus Sahlgren, Lisa Kaati, Milan Obaidi, Nazar Akrami

Hateful comments, swearwords, and sometimes even death threats are becoming a reality for many people in online environments today.
