no code implementations • EACL (AdaptNLP) 2021 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge.
no code implementations • 9 Jun 2021 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
In the standard fine-tuning scheme of TL, a model is first pre-trained on a source domain and subsequently fine-tuned on a target domain; source and target domains are therefore trained with the same architecture.
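The standard fine-tuning scheme described above can be illustrated with a minimal sketch: pre-train parameters on plentiful source data, then continue training the same parameters on scarce target data. This is a toy NumPy illustration of the scheme, not the paper's method; the linear model, data shapes, and learning rates are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

# Source domain: plentiful data drawn from a "true" linear map.
w_true = np.array([1.0, -2.0])
X_src = rng.normal(size=(500, 2))
y_src = X_src @ w_true + 0.1 * rng.normal(size=500)

# Target domain: only a few examples, from a slightly shifted map.
X_tgt = rng.normal(size=(20, 2))
y_tgt = X_tgt @ (w_true + 0.3) + 0.1 * rng.normal(size=20)

# Standard scheme: pre-train on source, then fine-tune the SAME
# parameters (same architecture) on the target domain.
w_pre = train(np.zeros(2), X_src, y_src)
w_ft = train(w_pre, X_tgt, y_tgt, steps=50)
```

Because both stages update the same parameter vector, the source and target stages are constrained to share one architecture, which is exactly the limitation the abstract points at.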
no code implementations • WS 2020 • Sara Meftah, Nasredine Semmar, Mohamed-Ayoub Tahiri, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
Two transfer learning approaches are prevalent in recent work for improving neural network performance on domains with small amounts of annotated data: multi-task learning, which trains the task of interest jointly with related auxiliary tasks to exploit their underlying similarities, and mono-task fine-tuning, where the model's weights are initialized with weights pretrained on a large-scale labeled source domain and then fine-tuned on labeled data of the target domain (the domain of interest).
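The multi-task variant can be sketched in a few lines: a shared encoder whose weights receive gradients from both the task of interest and an auxiliary task, each with its own output head. This is a hypothetical toy setup (linear encoder, regression heads, hand-picked targets), shown only to make the weight-sharing concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_hid = 4, 3
W = rng.normal(scale=0.1, size=(d_in, d_hid))        # shared encoder
heads = {"main": rng.normal(scale=0.1, size=d_hid),  # task of interest
         "aux":  rng.normal(scale=0.1, size=d_hid)}  # auxiliary task

def joint_step(X, y, name, lr=0.05):
    """One gradient step on one task; updates its head AND the shared W."""
    global W
    c = heads[name]
    h = X @ W                       # shared representation
    err = (h @ c) - y
    heads[name] = c - lr * 2 * h.T @ err / len(X)
    W = W - lr * 2 * np.outer(X.T @ err, c) / len(X)

X = rng.normal(size=(200, d_in))
y_main = X @ np.array([1.0, 0.0, -1.0, 0.5])
y_aux = X @ np.array([0.5, 1.0, 0.0, -0.5])          # related task

mse0 = float(np.mean((X @ W @ heads["main"] - y_main) ** 2))
for _ in range(500):                                  # alternate tasks
    joint_step(X, y_main, "main")
    joint_step(X, y_aux, "aux")
mse1 = float(np.mean((X @ W @ heads["main"] - y_main) ** 2))
```

Because the auxiliary task's gradients also flow into `W`, the shared representation is shaped by both tasks' underlying similarities; mono-task fine-tuning instead reuses a single pretrained `W` and trains one head.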
no code implementations • 6 Nov 2019 • Ahmed Ben Saad, Youssef Tamaazousti, Josselin Kherroubi, Alexis He
We tackle the problem of texture inpainting where the input images are textures with missing values along with masks that indicate the zones that should be generated.
no code implementations • JEPTALNRECITAL 2019 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
Transfer learning refers to the ability of a neural model trained on one task to generalize well enough, and correctly, to produce relevant results on another task that is close but different.
no code implementations • CVPR 2019 • Dim P. Papadopoulos, Youssef Tamaazousti, Ferda Ofli, Ingmar Weber, Antonio Torralba
From a visual perspective, every instruction step can be seen as a way to change the visual appearance of the dish by adding extra objects (e.g., adding an ingredient) or changing the appearance of the existing ones (e.g., cooking the dish).
no code implementations • NAACL 2019 • Sara Meftah, Youssef Tamaazousti, Nasredine Semmar, Hassane Essafi, Fatiha Sadat
Fine-tuning neural networks is widely used to transfer valuable knowledge from high-resource to low-resource domains.
Ranked #1 on Part-Of-Speech Tagging on Social media
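A key design choice when transferring from a high-resource to a low-resource domain is whether the pretrained layers are frozen (feature extraction) or also updated (fine-tuning). The sketch below contrasts the two regimes on a toy two-layer network; the "pretrained" encoder is just fixed random weights standing in for weights learned on a large source corpus, and every name and shape here is illustrative rather than the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

# Stand-in "pretrained" encoder (e.g. learned on a large news corpus);
# here its weights are just fixed random values for illustration.
W1 = rng.normal(size=(5, 8))

# Scarce target-domain data (e.g. social-media examples).
X = rng.normal(size=(40, 5))
y = X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + 0.1 * rng.normal(size=40)

def fit_head(X, y, tune_encoder=False, lr=0.01, steps=300):
    """Train a linear head on top of W1; optionally fine-tune W1 too."""
    W = W1.copy()
    w2 = np.zeros(8)
    for _ in range(steps):
        h = relu(X @ W)
        err = h @ w2 - y
        if tune_encoder:
            # Back-propagate through the ReLU into the encoder weights.
            dh = np.outer(err, w2) * (h > 0)
            W -= lr * 2 * X.T @ dh / len(X)
        w2 -= lr * 2 * h.T @ err / len(X)
    return W, w2, float(np.mean((relu(X @ W) @ w2 - y) ** 2))

_, _, mse_frozen = fit_head(X, y, tune_encoder=False)
_, _, mse_tuned = fit_head(X, y, tune_encoder=True)
```

Freezing keeps the transferred knowledge intact but limits adaptation; fine-tuning adapts the encoder to the target domain at the risk of overfitting the few labeled examples.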
no code implementations • 4 Apr 2019 • John Lin, Mohamed El Amine Seddik, Mohamed Tamaazousti, Youssef Tamaazousti, Adrien Bartoli
We propose a novel learning approach, in the form of a fully-convolutional neural network (CNN), which automatically and consistently removes specular highlights from a single image by generating its diffuse component.
no code implementations • 4 Oct 2018 • Julien Girard, Youssef Tamaazousti, Hervé Le Borgne, Céline Hudelot
This raises the question of to what extent the original representation is "universal", that is to say, directly adaptable to many different target-tasks.
1 code implementation • 27 Dec 2017 • Youssef Tamaazousti, Hervé Le Borgne, Céline Hudelot, Mohamed El Amine Seddik, Mohamed Tamaazousti
We also propose a unified framework for the methods based on diversifying the training problem.
no code implementations • CVPR 2017 • Youssef Tamaazousti, Hervé Le Borgne, Céline Hudelot
In a transfer-learning scheme, the intermediate layers of a pre-trained CNN are employed as universal image representation to tackle many visual classification problems.
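Using intermediate layers of a pre-trained network as a universal representation amounts to freezing the network, reading off activations at some depth, and training only a light classifier (a linear probe) on top for each new task. This is a minimal sketch of that scheme with random weights standing in for a pretrained CNN; the layer sizes, target task, and probe settings are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda z: np.maximum(z, 0.0)

# Stand-in for a pre-trained CNN: two fixed hidden layers whose
# activations serve as off-the-shelf representations.
W1 = rng.normal(size=(10, 16))
W2 = rng.normal(size=(16, 8)) / 4

def features(X, layer):
    """Return the activations of an intermediate layer (frozen)."""
    h1 = relu(X @ W1)
    return h1 if layer == 1 else relu(h1 @ W2)

def linear_probe(F, y, lr=0.01, steps=500):
    """Fit a logistic-regression head on frozen features."""
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(F @ w)))
        w -= lr * F.T @ (p - y) / len(F)
    return w

# A new target task never seen during "pre-training".
X = rng.normal(size=(100, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

accs = {}
for layer in (1, 2):
    F = features(X, layer)
    w = linear_probe(F, y)
    accs[layer] = float(np.mean(((F @ w) > 0) == y))
```

How well such probes perform across many target-tasks, and at which depth, is precisely what measures how "universal" the frozen representation is.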