1 code implementation • EMNLP (sustainlp) 2020 • Amine Abdaoui, Camille Pradel, Grégoire Sigel
The obtained results confirm that we can generate smaller models with comparable performance while reducing the total number of parameters by up to 45%.
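A hedged sketch of why trimming a model can remove such a large share of parameters (this is an illustration, not the authors' exact procedure): in multilingual BERT-style encoders the token-embedding matrix (`vocab_size × hidden_size`) dominates the parameter count, so keeping only the vocabulary needed for one language shrinks the model substantially. The vocabulary sizes and the per-layer estimate below are assumptions for illustration.

```python
# Rough parameter accounting for a BERT-base-style encoder.
HIDDEN = 768                              # hidden size (assumption: BERT-base)
LAYERS = 12                               # number of Transformer layers
PARAMS_PER_LAYER = 12 * HIDDEN * HIDDEN   # coarse per-layer estimate (attention + FFN)

def total_params(vocab_size: int) -> int:
    """Approximate total parameters: token embeddings + encoder layers."""
    embeddings = vocab_size * HIDDEN      # embedding matrix dominates for large vocabularies
    encoder = LAYERS * PARAMS_PER_LAYER
    return embeddings + encoder

full = total_params(119_547)   # multilingual BERT vocabulary size
small = total_params(30_000)   # hypothetical single-language vocabulary
print(f"reduction: {1 - small / full:.0%}")  # → reduction: 39%
```

With these illustrative numbers, dropping the unused vocabulary alone removes roughly 39% of the parameters, which is in the same ballpark as the 45% reduction reported above.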
no code implementations • 18 Jul 2023 • Amine Abdaoui, Sourav Dutta
Compared with the current state-of-the-art models using standard fine-tuning, the studied method achieves competitive results, although no single model is clearly best in this configuration.
2 code implementations • 25 Sep 2021 • Amine Abdaoui, Mohamed Berrimi, Mourad Oussalah, Abdelouahab Moussaoui
The obtained results show that a dedicated model pre-trained on a small dataset (150 MB) can outperform existing models trained on far more data (hundreds of GB).
no code implementations • JEPTALNRECITAL 2020 • Hadjer Khaldi, Amine Abdaoui, Farah Benamara, Grégoire Sigel, Nathalie Aussenac-Gilles
The evaluation of these models shows very encouraging results, a first step toward economic and competitive intelligence from French-language texts.