Search Results for author: Foteini Liwicki

Found 13 papers, 3 papers with code

Vector Representations of Idioms in Conversational Systems

no code implementations • 7 May 2022 • Tosin Adewumi, Foteini Liwicki, Marcus Liwicki

We experiment with three instances of the SoTA dialogue model, Dialogue Generative Pre-trained Transformer (DialoGPT), for conversation generation.

Information Retrieval, Machine Translation

State-of-the-art in Open-domain Conversational AI: A Survey

no code implementations • 2 May 2022 • Tosin Adewumi, Foteini Liwicki, Marcus Liwicki

The survey shows that recent SoTA conversational AI has made progress, but persistent challenges remain to be solved, and the female gender is more common than the male for conversational AI systems.

ML_LTU at SemEval-2022 Task 4: T5 Towards Identifying Patronizing and Condescending Language

no code implementations • 15 Apr 2022 • Tosin Adewumi, Lama Alkhaled, Hamam Mokayed, Foteini Liwicki, Marcus Liwicki

This paper describes the system used by the Machine Learning Group of LTU in subtask 1 of the SemEval-2022 Task 4: Patronizing and Condescending Language (PCL) Detection.

HaT5: Hate Language Identification using Text-to-Text Transfer Transformer

no code implementations • 11 Feb 2022 • Sana Sabah Sabry, Tosin Adewumi, Nosheen Abid, György Kovacs, Foteini Liwicki, Marcus Liwicki

We investigate the performance of a state-of-the-art (SoTA) architecture, T5 (available on the SuperGLUE leaderboard), and compare it with three previous SoTA architectures across 5 different tasks from 2 relatively diverse datasets.

Data Augmentation, Explainable Artificial Intelligence +1

Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning

no code implementations • 12 Oct 2021 • Tosin Adewumi, Rickard Brännvall, Nosheen Abid, Maryam Pahlavan, Sana Sabah Sabry, Foteini Liwicki, Marcus Liwicki

Perplexity score (an automated intrinsic language-model metric) and human-evaluation surveys were used to assess the performance of the fine-tuned models; the results indicate that the capacity for transfer learning can be exploited with considerable success.

Chatbot, Language Modelling +2
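The perplexity metric used above is simply the exponential of the average negative log-likelihood (cross-entropy) per token. A minimal sketch of that relationship, using hypothetical token probabilities rather than the paper's actual DialoGPT models:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood per token.

    token_probs: the probability the model assigned to each
    observed token (illustrative values, not from the paper).
    """
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to each of 4 tokens behaves like
# a uniform choice over 4 options, so its perplexity is exactly 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0
```

Lower perplexity means the model finds the evaluation text less "surprising", which is why it serves as an intrinsic metric alongside human surveys.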

Spatiotemporal Spike-Pattern Selectivity in Single Mixed-Signal Neurons with Balanced Synapses

no code implementations • 10 Jun 2021 • Mattias Nilsson, Foteini Liwicki, Fredrik Sandin

Realizing the potential of mixed-signal neuromorphic processors for ultra-low-power inference and learning requires efficient use of their inhomogeneous analog circuitry as well as sparse, time-based information encoding and processing.

Potential Idiomatic Expression (PIE)-English: Corpus for Classes of Idioms

1 code implementation • 25 Apr 2021 • Tosin P. Adewumi, Roshanak Vadoodi, Aparajita Tripathy, Konstantina Nikolaidou, Foteini Liwicki, Marcus Liwicki

The challenges that NLP systems face with tasks such as Machine Translation (MT), word sense disambiguation (WSD) and information retrieval make it imperative to have a labelled idioms dataset with classes, such as the one presented in this work.

Information Retrieval, Machine Translation +3

The Challenge of Diacritics in Yoruba Embeddings

no code implementations • 15 Nov 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki

The major contributions of this work include empirically establishing better performance for Yoruba embeddings trained on an undiacritized (normalized) dataset, and providing new analogy sets for evaluation.
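Undiacritizing text of the kind described above can be done with Unicode NFD decomposition, which splits accented characters into a base letter plus combining marks that can then be dropped. A minimal sketch using only the Python standard library; this is not the paper's actual preprocessing pipeline:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Normalize text by removing diacritical marks.

    NFD decomposition separates e.g. 'ù' into 'u' + a combining
    grave accent; filtering out combining characters leaves the
    plain base letters.
    """
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("Yorùbá"))  # Yoruba
```

Note that for a tonal language like Yoruba this normalization discards tone information, which is exactly the trade-off the paper investigates empirically.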

Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora

no code implementations • 6 Nov 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki

In this work, we show that the difference in performance of embeddings from differently sourced data for a given language can be due to other factors besides data size.

Exploring Swedish & English fastText Embeddings for NER with the Transformer

1 code implementation • 23 Jul 2020 • Tosin P. Adewumi, Foteini Liwicki, Marcus Liwicki

To achieve a good network performance in natural language processing (NLP) downstream tasks, several factors play important roles: dataset size, the right hyper-parameters, and well-trained embeddings.

Named Entity Recognition, NER

Synaptic Integration of Spatiotemporal Features with a Dynamic Neuromorphic Processor

no code implementations • 12 Feb 2020 • Mattias Nilsson, Foteini Liwicki, Fredrik Sandin

Here, we investigate synaptic integration of spatiotemporal spike patterns with multiple dynamic synapses on point-neurons in the DYNAP-SE neuromorphic processor, which offers a complementary resource-efficient, albeit less flexible, approach to feature detection.
