no code implementations • 7 May 2022 • Tosin Adewumi, Foteini Liwicki, Marcus Liwicki
We experiment with three instances of the SoTA dialogue model, Dialogue Generative Pre-trained Transformer (DialoGPT), for conversation generation.
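As a rough illustration of the setup, a DialoGPT instance can be loaded and queried for a conversational reply with the Hugging Face transformers library. The minimal sketch below assumes the publicly released microsoft/DialoGPT-medium checkpoint and generic sampling settings, not the paper's exact configuration.

```python
# Minimal DialoGPT conversation-generation sketch (checkpoint and decoding
# parameters are illustrative assumptions, not the paper's exact setup).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode one user turn, terminated with the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token,
                             return_tensors="pt")

# Sample a reply; top-k/top-p sampling keeps responses varied.
reply_ids = model.generate(input_ids, max_length=100,
                           pad_token_id=tokenizer.eos_token_id,
                           do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```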
no code implementations • 2 May 2022 • Tosin Adewumi, Foteini Liwicki, Marcus Liwicki
Results of the survey show that progress has been made with recent SoTA conversational AI, but persistent challenges remain to be solved, and that the female gender is more common than the male for conversational AI.
no code implementations • 17 Apr 2022 • Tosin Adewumi, Mofetoluwa Adeyemi, Aremu Anuoluwapo, Bukola Peters, Happy Buzaaba, Oyerinde Samuel, Amina Mardiyyah Rufai, Benjamin Ajibade, Tajudeen Gwadabe, Mory Moussou Koulibaly Traore, Tunde Ajayi, Shamsuddeen Muhammad, Ahmed Baruwa, Paul Owoicho, Tolulope Ogunremi, Phylis Ngigi, Orevaoghene Ahia, Ruqayya Nasir, Foteini Liwicki, Marcus Liwicki
The language with the most transferable properties is Nigerian Pidgin English, with a human-likeness score of 78.1%, of which 34.4% are unanimous.
no code implementations • 15 Apr 2022 • Tosin Adewumi, Lama Alkhaled, Hamam Mokayed, Foteini Liwicki, Marcus Liwicki
This paper describes the system used by the Machine Learning Group of LTU in subtask 1 of the SemEval-2022 Task 4: Patronizing and Condescending Language (PCL) Detection.
no code implementations • 11 Feb 2022 • Sana Sabah Sabry, Tosin Adewumi, Nosheen Abid, György Kovacs, Foteini Liwicki, Marcus Liwicki
We investigate the performance of a state-of-the-art (SoTA) architecture, T5 (available on the SuperGLUE leaderboard), and compare it with 3 other previous SoTA architectures across 5 different tasks from 2 relatively diverse datasets.
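For context, T5 frames every task as text-to-text: a classification input is posed as a prompt and the label is generated as text. A minimal sketch with the transformers library, assuming the public t5-base checkpoint and a GLUE-style task prefix rather than the paper's exact fine-tuning setup:

```python
# Illustrative T5 text-to-text inference; model size and task prefix are
# assumptions, not the paper's configuration.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# T5 casts classification as generation: the prompt names the task and
# the model emits the label ("positive"/"negative") as text.
inputs = tokenizer("sst2 sentence: the film was a delight",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```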
no code implementations • 12 Oct 2021 • Tosin Adewumi, Rickard Brännvall, Nosheen Abid, Maryam Pahlavan, Sana Sabah Sabry, Foteini Liwicki, Marcus Liwicki
Perplexity (an automated, intrinsic language model metric) and human-evaluation surveys were used to assess the performance of the fine-tuned models; the results indicate that transfer learning can be exploited with considerable success.
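Perplexity is the exponential of the mean cross-entropy a language model assigns to held-out text, so lower is better. A minimal sketch of the computation with a causal language model (GPT-2 here purely as a stand-in for the fine-tuned models):

```python
# Perplexity = exp(mean token-level cross-entropy) on held-out text.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

enc = tokenizer("The weather is nice today.", return_tensors="pt")
with torch.no_grad():
    # With labels supplied, the model returns the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"perplexity = {math.exp(loss.item()):.2f}")
```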
no code implementations • 10 Jun 2021 • Mattias Nilsson, Foteini Liwicki, Fredrik Sandin
Realizing the potential of mixed-signal neuromorphic processors for ultra-low-power inference and learning requires efficient use of their inhomogeneous analog circuitry as well as sparse, time-based information encoding and processing.
1 code implementation • 25 Apr 2021 • Tosin P. Adewumi, Roshanak Vadoodi, Aparajita Tripathy, Konstantina Nikolaidou, Foteini Liwicki, Marcus Liwicki
The challenges that NLP systems face in tasks such as Machine Translation (MT), word sense disambiguation (WSD) and information retrieval make it imperative to have a labelled idioms dataset with classes, such as the one presented in this work.
no code implementations • 15 Nov 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki
The major contributions of this work include the empirical establishment of a better performance for Yoruba embeddings trained on an undiacritized (normalized) dataset and the provision of new analogy sets for evaluation.
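A generic recipe for producing an undiacritized (normalized) corpus is to decompose each character with Unicode NFD and drop the combining marks; the sketch below illustrates the idea and is not necessarily the paper's exact preprocessing.

```python
# Strip diacritics via NFD decomposition; tonal marks and under-dots on
# Yoruba characters are removed (a generic recipe, assumed here, not the
# paper's exact pipeline).
import unicodedata

def strip_diacritics(text: str) -> str:
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("Báwo ni ọjà?"))  # -> "Bawo ni oja?"
```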
no code implementations • 6 Nov 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki
In this work, we show that the difference in performance of embeddings from differently sourced data for a given language can be due to other factors besides data size.
1 code implementation • 23 Jul 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki
To achieve good network performance in natural language processing (NLP) downstream tasks, several factors play important roles: dataset size, the right hyper-parameters, and well-trained embeddings.
2 code implementations • 23 Mar 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki
However, the wrong combination of hyper-parameters can produce poor-quality vectors.
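As an illustration, the hyper-parameters in question are visible in a Gensim Word2Vec call: architecture (skip-gram vs. CBOW), training algorithm (hierarchical softmax vs. negative sampling), window size and dimensionality all interact. The values below are placeholders, not recommendations from the paper.

```python
# Gensim Word2Vec exposing the hyper-parameters whose combinations matter;
# the toy corpus and values are illustrative assumptions.
from gensim.models import Word2Vec

sentences = [["good", "embeddings", "need", "good", "hyper-parameters"],
             ["wrong", "choices", "produce", "poor", "vectors"]]

model = Word2Vec(
    sentences,
    vector_size=100,    # embedding dimensionality
    window=5,           # context window size
    sg=1,               # 1 = skip-gram, 0 = CBOW
    hs=0, negative=5,   # negative sampling rather than hierarchical softmax
    min_count=1,
    epochs=10,
)
print(model.wv.most_similar("vectors", topn=2))
```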
no code implementations • 12 Feb 2020 • Mattias Nilsson, Foteini Liwicki, Fredrik Sandin
Here, we investigate synaptic integration of spatiotemporal spike patterns with multiple dynamic synapses on point-neurons in the DYNAP-SE neuromorphic processor, which offers a complementary resource-efficient, albeit less flexible, approach to feature detection.
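As a software analogue of the idea, a point-neuron with multiple dynamic synapses can be simulated by giving each synapse its own exponentially decaying current that the membrane then integrates; the toy sketch below is a simplified illustration, not the DYNAP-SE circuit model.

```python
# Toy point-neuron with two dynamic synapses of different time constants;
# a simplified software analogue, not the DYNAP-SE hardware model.
import numpy as np

dt, T = 1e-4, 0.1                        # time step and duration (s)
tau_syn = np.array([5e-3, 20e-3])        # per-synapse current decay (s)
tau_mem = 10e-3                          # membrane time constant (s)
spikes = {0: [0.020, 0.025], 1: [0.020]} # input spike times per synapse

i_syn, v_mem = np.zeros(2), 0.0
for step in range(int(T / dt)):
    t = step * dt
    for syn, times in spikes.items():
        if any(abs(t - ts) < dt / 2 for ts in times):
            i_syn[syn] += 1.0            # each spike kicks its synapse
    i_syn -= dt * i_syn / tau_syn        # exponential decay per synapse
    v_mem += dt * (i_syn.sum() - v_mem) / tau_mem  # leaky integration
print(f"final membrane potential: {v_mem:.4f}")
```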