no code implementations • 21 May 2024 • Mohamed Amine Ferrag, Fatima Alwahedi, Ammar Battah, Bilel Cherif, Abdechakour Mechri, Norbert Tihanyi
This paper provides a comprehensive review of the future of cybersecurity through Generative AI and Large Language Models (LLMs).
1 code implementation • 29 Apr 2024 • Norbert Tihanyi, Tamas Bisztray, Mohamed Amine Ferrag, Ridhi Jain, Lucas C. Cordeiro
This study presents a comparative analysis of state-of-the-art large language models (LLMs), examining how likely they are to introduce vulnerabilities when writing simple C programs from a neutral zero-shot prompt.
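As a rough illustration of this setup, a neutral zero-shot query can be issued to a model and the generated program saved for later vulnerability analysis. This is a minimal sketch, not the paper's exact protocol: the model name, prompt wording, and use of the OpenAI client are illustrative assumptions.

```python
# Sketch (assumptions noted): send a neutral zero-shot prompt to an LLM
# and save the generated C program for later vulnerability analysis.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A deliberately neutral task description with no security guidance.
prompt = "Write a C program that reads a line of text from the user and prints it back."

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; the study compares several models
    messages=[{"role": "user", "content": prompt}],
)

with open("generated.c", "w") as f:
    f.write(response.choices[0].message.content)
```

The saved program can then be checked with a static analyzer or model checker to decide whether the model introduced a weakness.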
1 code implementation • 12 Feb 2024 • Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Tamas Bisztray, Merouane Debbah
The results can serve as a reference for comparing the general cybersecurity knowledge of humans and LLMs.
no code implementations • 13 Jul 2023 • Mohamed Amine Ferrag, Ammar Battah, Norbert Tihanyi, Ridhi Jain, Diana Maimut, Fatima Alwahedi, Thierry Lestable, Narinderjit Singh Thandi, Abdechakour Mechri, Merouane Debbah, Lucas C. Cordeiro
SecureFalcon achieves 94% accuracy in binary classification and up to 92% in multiclassification, with instant CPU inference times.
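If a SecureFalcon-style checkpoint were exposed as a standard sequence-classification model, inference could look like the sketch below. The model id is hypothetical, not an official release; only the generic Hugging Face `transformers` pipeline API is assumed.

```python
# Minimal sketch, assuming a hypothetical sequence-classification checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/securefalcon",  # hypothetical model id
)

c_snippet = """
char buf[8];
gets(buf);  /* unbounded read into a fixed-size buffer */
"""

# Returns a label/score pair, e.g. [{'label': 'VULNERABLE', 'score': 0.97}]
print(classifier(c_snippet))
```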
no code implementations • 5 Jul 2023 • Norbert Tihanyi, Tamas Bisztray, Ridhi Jain, Mohamed Amine Ferrag, Lucas C. Cordeiro, Vasileios Mavroeidis
This paper presents the FormAI dataset, a large collection of 112,000 AI-generated compilable and independent C programs with vulnerability classification.
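A dataset of this shape is straightforward to explore once downloaded. The sketch below assumes a local CSV export; the file name and column names are assumptions about the release layout, not the dataset's documented schema.

```python
# Sketch: inspect a local copy of the FormAI dataset, assuming a CSV export.
import pandas as pd

df = pd.read_csv("FormAI_dataset.csv")  # hypothetical file name

# List the available fields, then look at the label distribution,
# assuming a column that records the vulnerability classification.
print(df.columns.tolist())
print(df["vulnerability"].value_counts())  # hypothetical column name
```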
no code implementations • 25 Jun 2023 • Mohamed Amine Ferrag, Mthandazo Ndhlovu, Norbert Tihanyi, Lucas C. Cordeiro, Merouane Debbah, Thierry Lestable, Narinderjit Singh Thandi
The field of Natural Language Processing (NLP) is currently undergoing a revolutionary transformation driven by the power of pre-trained Large Language Models (LLMs) based on groundbreaking Transformer architectures.
1 code implementation • 24 May 2023 • Norbert Tihanyi, Ridhi Jain, Yiannis Charalambous, Mohamed Amine Ferrag, Youcheng Sun, Lucas C. Cordeiro
Initially, we employ Bounded Model Checking (BMC) to identify vulnerabilities and extract counterexamples.
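In practice the BMC step can be driven from a script. The sketch below assumes the ESBMC model checker is installed on the path; `--unwind` is ESBMC's documented loop-unrolling bound, but the exact configuration used in the paper may differ.

```python
# Sketch of the BMC step: run ESBMC on a generated C file and capture
# the counterexample trace it prints when a property is violated.
import subprocess

result = subprocess.run(
    ["esbmc", "generated.c", "--unwind", "8"],
    capture_output=True,
    text=True,
)

# On a violation, ESBMC reports VERIFICATION FAILED plus a counterexample
# trace on stdout, which can then be passed to an LLM to guide a repair.
print(result.stdout)
```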