no code implementations • 31 Oct 2024 • Phil Wee, Riyadh Baghdadi
However, a key limitation observed in these models is their propensity to hallucinate significantly more often than larger models.
no code implementations • 17 Sep 2024 • Nazim Bendib, Iheb Nassim Aouadj, Riyadh Baghdadi
In this project, we introduce the first RL environment for the MLIR compiler, dedicated to facilitating MLIR compiler research and enabling automatic code optimization using Multi-Action Reinforcement Learning.
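As an illustration of what such an environment looks like, here is a minimal Gym-style sketch of pass selection; the class name, pass list, and simulated runtime are hypothetical stand-ins, not the paper's actual MLIR environment.

```python
# A toy Gym-style environment for compiler autotuning. All names and the
# simulated measurement below are invented for illustration; the paper's
# environment drives real MLIR passes and real measurements.
import random

class PassEnv:
    """The agent picks a sequence of optimization passes for a program."""

    PASSES = ["loop-unroll", "loop-fusion", "vectorize", "licm"]

    def __init__(self, baseline_runtime=100.0):
        self.baseline = baseline_runtime
        self.reset()

    def reset(self):
        self.applied = []
        self.runtime = self.baseline
        return tuple(self.applied)  # observation: passes applied so far

    def step(self, action):
        """Apply one pass; the reward is the (simulated) runtime saved."""
        self.applied.append(self.PASSES[action])
        new_runtime = self.runtime * random.uniform(0.7, 1.05)  # stand-in for a real measurement
        reward = self.runtime - new_runtime
        self.runtime = new_runtime
        done = len(self.applied) >= 4  # episode ends after a fixed pass budget
        return tuple(self.applied), reward, done, {}

# A random agent; an RL agent would learn which actions maximize the return.
env, total, done = PassEnv(), 0.0, False
obs = env.reset()
while not done:
    obs, reward, done, _ = env.step(random.randrange(len(PassEnv.PASSES)))
    total += reward
print(f"episode return (runtime saved): {total:.1f}")
```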
no code implementations • 7 Aug 2024 • Inas Bachiri, Hadjer Benmeziane, Smail Niar, Riyadh Baghdadi, Hamza Ouarnoughi, Abdelkrime Aries
Two notable techniques employed to achieve this goal are Hardware-aware Neural Architecture Search (HW-NAS) and Automatic Code Optimization (ACO).
Tasks: Hardware-Aware Neural Architecture Search • Neural Architecture Search • +1
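For a sense of the HW-NAS side, the sketch below scores candidate architectures with the multiplicative accuracy-latency objective popularized by MnasNet; the candidates and their numbers are invented for illustration.

```python
# Hardware-aware scoring of candidate architectures: accuracy scaled by a
# latency penalty, so a slightly less accurate but much faster network can win.
candidates = [
    {"name": "net-small",  "accuracy": 0.71, "latency_ms": 12.0},
    {"name": "net-medium", "accuracy": 0.76, "latency_ms": 35.0},
    {"name": "net-large",  "accuracy": 0.78, "latency_ms": 90.0},
]

LATENCY_BUDGET_MS = 40.0  # target latency on the deployment hardware

def hw_aware_score(c, alpha=0.2):
    """accuracy * (latency / budget) ** -alpha: slower nets are discounted."""
    return c["accuracy"] * (c["latency_ms"] / LATENCY_BUDGET_MS) ** -alpha

best = max(candidates, key=hw_aware_score)
print(best["name"], round(hw_aware_score(best), 3))
```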
no code implementations • 14 Jul 2024 • Marwa Naïr, Kamel Yamani, Lynda Said Lhadj, Riyadh Baghdadi
In this paper, we explore the potential of curriculum learning in enhancing the performance of these models.
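A minimal sketch of the curriculum idea follows, assuming a simple length-based difficulty proxy; the paper's actual difficulty criterion may differ.

```python
# Curriculum learning in one function: sort training samples by a difficulty
# measure and feed them easiest-first instead of in random order.
def curriculum_batches(samples, difficulty, batch_size=32):
    """Yield batches ordered from easiest to hardest."""
    ordered = sorted(samples, key=difficulty)
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

corpus = [
    "def f(a, b): return a * b + a",
    "x = 1",
    "for i in range(10): total += i",
]
# Token count as the (assumed) difficulty proxy.
for batch in curriculum_batches(corpus, difficulty=lambda s: len(s.split()), batch_size=2):
    print(batch)  # a real loop would call train_step(model, batch) here
```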
no code implementations • 20 Mar 2024 • Aymene Berriche, Mehdi Adjal Zakaria, Riyadh Baghdadi
In this study, we explore the efficacy of employing high-resolution features learned through state-of-the-art techniques for image retrieval tasks.
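As a reference point, embedding-based retrieval typically reduces to nearest-neighbor search over feature vectors; in the sketch below, random vectors stand in for the learned high-resolution features.

```python
# Embedding-based image retrieval: rank database images by cosine similarity
# to the query descriptor. A pretrained backbone would produce the features.
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 512))                # one 512-d descriptor per image
db /= np.linalg.norm(db, axis=1, keepdims=True)  # unit-normalize once

def retrieve(query, k=5):
    """Return indices of the k database images most similar to the query."""
    q = query / np.linalg.norm(query)
    sims = db @ q                                # cosine similarity for unit vectors
    return np.argsort(-sims)[:k]

print(retrieve(rng.normal(size=512)))
```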
no code implementations • 18 Mar 2024 • Massinissa Merouani, Khaled Afif Boudaoud, Iheb Nassim Aouadj, Nassim Tchoulak, Islem Kara Bernou, Hamza Benyamina, Fatima Benbouzid-Si Tayeb, Karima Benatchba, Hugh Leather, Riyadh Baghdadi
In this paper, we introduce LOOPer, the first polyhedral autoscheduler that uses a deep-learning-based cost model and covers a large set of affine transformations and programs.
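A minimal sketch of the autoscheduling loop this describes, with an invented transformation list and a stubbed cost model in place of LOOPer's actual search space and network:

```python
# Autoscheduling in miniature: enumerate candidate schedules (sequences of
# affine transformations) and keep whichever one the cost model predicts fastest.
import itertools
import random

TRANSFORMS = ["interchange", "tile", "unroll", "parallelize"]

def predict_speedup(program, schedule):
    """Stand-in for a trained cost model: (program, schedule) -> speedup."""
    random.seed(hash((program, schedule)))  # deterministic fake prediction
    return random.uniform(0.5, 4.0)

def autoschedule(program, depth=2):
    best, best_speedup = (), 1.0            # baseline: no transformation
    for schedule in itertools.permutations(TRANSFORMS, depth):
        s = predict_speedup(program, schedule)
        if s > best_speedup:
            best, best_speedup = schedule, s
    return best, best_speedup

print(autoschedule("matmul"))
```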
1 code implementation • 11 Mar 2024 • Kamel Yamani, Marwa Naïr, Riyadh Baghdadi
In recent years, data has emerged as the new gold, serving as a powerful tool for creating intelligent systems.
1 code implementation • 7 May 2023 • Hazem Ibrahim, Fengyuan Liu, Rohail Asim, Balaraju Battu, Sidahmed Benabderrahmane, Bashar Alhafni, Wifag Adnan, Tuka Alhanai, Bedoor AlShebli, Riyadh Baghdadi, Jocelyn J. Bélanger, Elena Beretta, Kemal Celik, Moumena Chaqfeh, Mohammed F. Daqaq, Zaynab El Bernoussi, Daryl Fougnie, Borja Garcia de Soto, Alberto Gandolfi, Andras Gyorgy, Nizar Habash, J. Andrew Harris, Aaron Kaufman, Lefteris Kirousis, Korhan Kocak, Kangsan Lee, Seungah S. Lee, Samreen Malik, Michail Maniatakos, David Melcher, Azzam Mourad, Minsu Park, Mahmoud Rasras, Alicja Reuben, Dania Zantout, Nancy W. Gleason, Kinga Makovi, Talal Rahwan, Yasir Zaki
Moreover, current AI-text classifiers cannot reliably detect ChatGPT's use in school work, due to their propensity to classify human-written answers as AI-generated, as well as the ease with which AI-generated text can be edited to evade detection.
no code implementations • 8 Jun 2022 • Massinissa Merouani, Khaled Afif Boudaoud, Iheb Nassim Aouadj, Nassim Tchoulak, Fatima Benbouzid-Sitayeb, Karima Benatchba, Hugh Leather, Riyadh Baghdadi
In this paper, we present work in progress on a deep-learning-based approach to automatic code optimization in polyhedral compilers.
no code implementations • 11 Apr 2021 • Riyadh Baghdadi, Massinissa Merouani, Mohamed-Hicham Leghettas, Kamel Abdous, Taha Arbaoui, Karima Benatchba, Saman Amarasinghe
Unlike previous models, the proposed one works on full programs and does not rely on any heavy feature engineering.
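A minimal sketch of a cost model that consumes a whole-program representation rather than hand-crafted features; the recurrent architecture below mirrors the idea only, not the paper's exact model.

```python
# Whole-program cost model: embed program tokens, encode the sequence with an
# LSTM, and regress a predicted speedup, with no hand-crafted features.
import torch
import torch.nn as nn

class ProgramCostModel(nn.Module):
    def __init__(self, vocab_size=512, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq, embed)
        _, (h, _) = self.encoder(x)          # final hidden state summarizes the program
        return self.head(h[-1]).squeeze(-1)  # one predicted speedup per program

model = ProgramCostModel()
fake_batch = torch.randint(0, 512, (4, 100))  # 4 tokenized programs of 100 tokens
print(model(fake_batch).shape)                # torch.Size([4])
```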
no code implementations • 2 Jul 2020 • Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
Machine learning (ML) models are widely used in many important domains.
no code implementations • 7 May 2020 • Riyadh Baghdadi, Abdelkader Nadir Debbagh, Kamel Abdous, Fatima Zohra Benhamida, Alex Renda, Jonathan Elliott Frankle, Michael Carbin, Saman Amarasinghe
In this paper, we demonstrate a compiler that can optimize sparse and recurrent neural networks, both of which are currently outside the scope of existing neural network compilers (here, sparse neural networks means networks that can be accelerated with sparse tensor algebra techniques).
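As an illustration of the sparse tensor algebra techniques referred to here, the sketch below multiplies a pruned weight matrix stored in CSR form, touching only the nonzero weights:

```python
# Sparse tensor algebra in miniature: a pruned weight matrix in CSR form,
# multiplied against a vector while skipping the zeroed-out weights entirely.
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = W @ x, with W given as CSR arrays (values, col_idx, row_ptr)."""
    y = np.zeros(len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# The 3x3 pruned matrix [[2,0,0],[0,0,3],[0,4,0]] in CSR form:
values, col_idx, row_ptr = [2.0, 3.0, 4.0], [0, 2, 1], [0, 1, 2, 3]
print(csr_spmv(values, col_idx, row_ptr, np.ones(3)))  # [2. 3. 4.]
```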
4 code implementations • 2 May 2018 • Yunming Zhang, Mengjiao Yang, Riyadh Baghdadi, Shoaib Kamil, Julian Shun, Saman Amarasinghe
This paper introduces GraphIt, a new DSL for graph computations that generates fast implementations for algorithms with different performance characteristics running on graphs with different sizes and structures.
Tasks: Programming Languages
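For context, the sketch below writes the algorithm half of a PageRank-style computation in plain Python with one fixed "push" traversal; GraphIt's point is that a separate schedule can switch traversal direction, parallelism, and data layout without touching the algorithm.

```python
# PageRank-style rank propagation over an edge list, hard-coded to a "push"
# traversal. In GraphIt, the schedule (not the algorithm) would choose push
# vs. pull, parallelization, and layout for the target graph.
def pagerank(edges, n, iters=10, damping=0.85):
    out_deg = [0] * n
    for src, _ in edges:
        out_deg[src] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        contrib = [0.0] * n
        for src, dst in edges:  # push: each edge sends rank to its target
            contrib[dst] += rank[src] / out_deg[src]
        rank = [(1 - damping) / n + damping * c for c in contrib]
    return rank

print(pagerank([(0, 1), (1, 2), (2, 0), (0, 2)], n=3))
```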
3 code implementations • 27 Apr 2018 • Riyadh Baghdadi, Jessica Ray, Malek Ben Romdhane, Emanuele Del Sozzo, Abdurrahman Akkas, Yunming Zhang, Patricia Suriana, Shoaib Kamil, Saman Amarasinghe
This paper introduces Tiramisu, a polyhedral framework designed to generate high performance code for multiple platforms including multicores, GPUs, and distributed machines.
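As an illustration of the kind of transformation such a framework applies, here is a loop-tiling sketch in Python; Tiramisu itself exposes transformations like this as scheduling commands on a C++ API, so the sketch only shows the effect, not the interface.

```python
# Loop tiling applied to matrix multiplication: iterate over tile origins,
# then within each tile, so the working set fits in cache. A polyhedral
# framework derives such nests automatically from a scheduling directive.
def matmul_tiled(A, B, n, m, p, tile=32):
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, p, tile):
            for kk in range(0, m, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, p)):
                        for k in range(kk, min(kk + tile, m)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
print(matmul_tiled(A, A, n=2, m=2, p=2, tile=2))  # [[7.0, 10.0], [15.0, 22.0]]
```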