Search Results for author: Fatiha Sadat

Found 35 papers, 1 paper with code

On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets

no code implementations EACL (AdaptNLP) 2021 Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge.

Chunking Domain Adaptation +5

Revitalisation des langues autochtones via le prétraitement et la traduction automatique neuronale: le cas de l’inuktitut (Revitalization and Preservation of Indigenous Languages through Natural Language Processing)

no code implementations JEP/TALN/RECITAL 2021 Tan Le Ngoc, Fatiha Sadat

We present French and English summaries of the article (Tan Le & Sadat, 2020) presented at the 28th International Conference on Computational Linguistics in 2020.

Towards a First Automatic Unsupervised Morphological Segmentation for Inuinnaqtun

no code implementations NAACL (AmericasNLP) 2021 Ngoc Tan Le, Fatiha Sadat

Low-resource polysynthetic languages pose many challenges in NLP tasks such as morphological analysis and Machine Translation, owing to the scarcity of available resources and tools and to their morphological complexity.

Machine Translation Morphological Analysis +1
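As a rough illustration of the unsupervised segmentation problem the paper addresses (this toy heuristic is an assumption, not the paper's actual method), one can strip suffixes that recur across a word list; English stand-in words are used here since no Inuinnaqtun data is given:

```python
# Toy unsupervised-style morphological segmentation (illustrative only):
# split off the longest suffix that occurs frequently in the word list.
from collections import Counter

def suffix_counts(words, max_len=3):
    # count every word-final substring up to max_len characters
    counts = Counter()
    for w in words:
        for k in range(1, min(max_len, len(w) - 1) + 1):
            counts[w[-k:]] += 1
    return counts

def segment(word, counts, min_freq=2):
    # prefer the longest suffix seen in at least `min_freq` words
    for k in range(min(3, len(word) - 1), 0, -1):
        if counts[word[-k:]] >= min_freq:
            return word[:-k], word[-k:]
    return (word,)

words = ["walking", "talking", "walked", "talked"]  # illustrative stand-in data
counts = suffix_counts(words)
print(segment("walking", counts))  # → ('walk', 'ing')
```

Real polysynthetic morphology is far harder than this suffix heuristic suggests, which is precisely the paper's motivation.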

Automatic Spell Checker and Correction for Under-represented Spoken Languages: Case Study on Wolof

no code implementations 22 May 2023 Thierno Ibrahima Cissé, Fatiha Sadat

This paper presents a spell checker and correction tool specifically designed for Wolof, an under-represented spoken language in Africa.
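The core mechanism of a dictionary-lookup spell corrector can be sketched in a few lines (this is a generic edit-distance sketch, not the paper's system; the tiny Wolof word list is purely illustrative):

```python
# Minimal spell-correction sketch: suggest the lexicon entry with the
# smallest Levenshtein edit distance to the misspelled word.
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(word, lexicon, max_dist=2):
    if word in lexicon:                 # already a known word
        return word
    dist, best = min((edit_distance(word, w), w) for w in lexicon)
    return best if dist <= max_dist else word

lexicon = {"ndox", "xale", "jërëjëf"}   # tiny illustrative Wolof word list
print(correct("ndoxx", lexicon))        # → ndox
```

A production tool would add a much larger lexicon, candidate ranking, and handling of Wolof-specific orthographic variation.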

Neural Supervised Domain Adaptation by Augmenting Pre-trained Models with Random Units

no code implementations9 Jun 2021 Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

In the standard fine-tuning scheme of TL, a model is initially pre-trained on a source domain and subsequently fine-tuned on a target domain and, therefore, source and target domains are trained using the same architecture.

Chunking Domain Adaptation +5

Multi-Task Supervised Pretraining for Neural Domain Adaptation

no code implementations WS 2020 Sara Meftah, Nasredine Semmar, Mohamed-Ayoub Tahiri, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Two prevalent transfer learning approaches are used in recent works to improve neural networks performance for domains with small amounts of annotated data: Multi-task learning which involves training the task of interest with related auxiliary tasks to exploit their underlying similarities, and Mono-task fine-tuning, where the weights of the model are initialized with the pretrained weights of a large-scale labeled source domain and then fine-tuned with labeled data of the target domain (domain of interest).

Domain Adaptation Multi-Task Learning
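The two schemes contrasted above can be caricatured with a 1-D regression toy (an assumed illustration, not the paper's models): multi-task pretraining pools gradient signal from related tasks into a shared parameter, which mono-task fine-tuning then adapts with a little target data:

```python
# Toy contrast of multi-task pretraining followed by mono-task fine-tuning,
# using one shared scalar parameter w and squared-error SGD.
def sgd_step(w, x, y, lr=0.1):
    # one gradient step for the loss (w*x - y)^2
    return w - lr * 2 * (w * x - y) * x

def train(w, data, epochs=50):
    for _ in range(epochs):
        for x, y in data:
            w = sgd_step(w, x, y)
    return w

aux_task_a = [(1.0, 2.0), (2.0, 4.0)]   # both auxiliary tasks are consistent with w ≈ 2
aux_task_b = [(3.0, 6.0)]
target     = [(1.0, 2.2)]               # small labeled target domain, optimum w = 2.2

w = 0.0
w = train(w, aux_task_a + aux_task_b)   # multi-task pretraining on pooled tasks
w = train(w, target, epochs=10)         # mono-task fine-tuning on target data
print(round(w, 2))                      # close to the target optimum 2.2
```

The point of the toy: pretraining lands the shared parameter near a good region, so a handful of target updates suffices, which is the benefit both transfer approaches exploit.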

Augmenting Named Entity Recognition with Commonsense Knowledge

no code implementations WS 2019 Gaith Dekhili, Tan Ngoc Le, Fatiha Sadat

Commonsense can be vital in some applications like Natural Language Understanding (NLU), where it is often required to resolve ambiguity arising from implicit knowledge and underspecification.

Named Entity Recognition +2

Exploration de l'apprentissage par transfert pour l'analyse de textes des réseaux sociaux (Exploring neural transfer learning for social media text analysis)

no code implementations JEPTALNRECITAL 2019 Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Transfer learning denotes the ability of a neural model trained on one task to generalize well enough to produce relevant results on a related but different task.

Transfer Learning

Sequence to Sequence Learning for Query Expansion

no code implementations25 Dec 2018 Salah Zaiem, Fatiha Sadat

Using sequence-to-sequence algorithms for query expansion has not yet been explored in the Information Retrieval or Question-Answering literature.

Information Retrieval Keyword Extraction +3

Using Neural Transfer Learning for Morpho-syntactic Tagging of South-Slavic Languages Tweets

no code implementations COLING 2018 Sara Meftah, Nasredine Semmar, Fatiha Sadat, Stephan Raaijmakers

In this paper, we describe a morpho-syntactic tagger of tweets, an important component of the CEA List DeepLIMA tool which is a multilingual text analysis platform based on deep learning.

Part-Of-Speech Tagging Transfer Learning

Translittération automatique pour une paire de langues peu dotée (Automatic transliteration for an under-resourced language pair)

no code implementations JEPTALNRECITAL 2017 Ngoc Tan Le, Fatiha Sadat, Lucie M{\'e}nard

In this work, we present a grapheme-to-phoneme conversion demonstration to address the transliteration problem for an under-resourced language pair, with an application to French-Vietnamese.

Named Entity Recognition and Hashtag Decomposition to Improve the Classification of Tweets

no code implementations WS 2016 Billal Belainine, Alexsandro Fonseca, Fatiha Sadat

We evaluate and compare several automatic classification systems using part or all of the items described in our contributions, and find that filtering by part of speech and named entity recognition dramatically increases the classification precision to 77.3%.

Classification General Classification +3

UQAM-NTL: Named entity recognition in Twitter messages

no code implementations WS 2016 Ngoc Tan Le, Fatma Mallek, Fatiha Sadat

This paper describes our system used in the 2nd Workshop on Noisy User-generated Text (WNUT) shared task for Named Entity Recognition (NER) in Twitter, in conjunction with Coling 2016.

BIG-bench Machine Learning Language Modelling +3

Lexfom: a lexical functions ontology model

no code implementations WS 2016 Alexsandro Fonseca, Fatiha Sadat, François Lareau

For example, antonymy is a type of relation represented by the lexical function Anti: Anti(big) = small.

Relation
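A lexical function, as in the Anti(big) = small example above, is just a mapping from a keyword to its value; a minimal sketch (an illustration, not Lexfom's actual ontology model) might look like:

```python
# Toy representation of lexical functions as keyword-to-value mappings.
LEXICAL_FUNCTIONS = {
    "Anti": {"big": "small", "hot": "cold"},   # antonymy
    "Syn":  {"big": "large"},                  # synonymy
    "Magn": {"rain": "heavy"},                 # intensifier ("heavy rain")
}

def apply_lf(name, keyword):
    # return the value of lexical function `name` for `keyword`, or None
    return LEXICAL_FUNCTIONS.get(name, {}).get(keyword)

print(apply_lf("Anti", "big"))   # → small
```

Lexfom's contribution is encoding such functions and their relations in an ontology rather than flat tables, so they can be queried and composed formally.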

Building a Bilingual Vietnamese-French Named Entity Annotated Corpus through Cross-Linguistic Projection

no code implementations JEPTALNRECITAL 2015 Ngoc Tan Le, Fatiha Sadat

This paper focuses on an automatic construction of named entity annotated corpora for Vietnamese-French, a less-resourced pair of languages.
