Search Results for author: Mohamed Amine Ferrag

Found 8 papers, 1 paper with code

Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models

no code implementations · 29 Apr 2024 · Norbert Tihanyi, Tamas Bisztray, Mohamed Amine Ferrag, Ridhi Jain, Lucas C. Cordeiro

This study provides a comparative analysis of state-of-the-art large language models (LLMs), examining how likely they are to generate vulnerabilities when writing simple C programs from a neutral zero-shot prompt.
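
The study summarized above hinges on a simple pipeline: give an LLM a neutral, task-only instruction, collect the C program it writes, and then label the code for vulnerabilities. Below is a minimal sketch of the generation step, assuming an OpenAI-compatible Python client; the prompt text and model name are illustrative placeholders, not the paper's exact setup.

```python
# Minimal sketch: generate a simple C program from a neutral zero-shot prompt.
# Assumptions: an OpenAI-compatible API client; the prompt text and the model
# name are illustrative placeholders, not the ones used in FormAI-v2.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Neutral" here means the prompt only states the task and says nothing
# about security, safe coding practices, or vulnerability avoidance.
NEUTRAL_PROMPT = (
    "Write a small, self-contained C program that reads a line of text "
    "from the user and prints it back. Output only the code."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": NEUTRAL_PROMPT}],
)

c_source = resp.choices[0].message.content
with open("sample.c", "w") as f:
    f.write(c_source)

# The generated file can then be handed to a static analyzer or a bounded
# model checker to label potential vulnerabilities.
```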

CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity

no code implementations · 12 Feb 2024 · Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah

Large Language Models (LLMs) excel across various domains, from computer vision to medical diagnostics.

SecureFalcon: The Next Cyber Reasoning System for Cyber Security

no code implementations · 13 Jul 2023 · Mohamed Amine Ferrag, Ammar Battah, Norbert Tihanyi, Merouane Debbah, Thierry Lestable, Lucas C. Cordeiro

Software vulnerabilities, which lead to various detriments such as crashes, data loss, and security breaches, significantly hinder software quality and affect the market adoption of software applications and systems.

C++ code · Fault localization +1

The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification

no code implementations · 5 Jul 2023 · Norbert Tihanyi, Tamas Bisztray, Ridhi Jain, Mohamed Amine Ferrag, Lucas C. Cordeiro, Vasileios Mavroeidis

This paper presents the FormAI dataset, a large collection of 112,000 AI-generated compilable and independent C programs with vulnerability classification.

Revolutionizing Cyber Threat Detection with Large Language Models: A privacy-preserving BERT-based Lightweight Model for IoT/IIoT Devices

no code implementations · 25 Jun 2023 · Mohamed Amine Ferrag, Mthandazo Ndhlovu, Norbert Tihanyi, Lucas C. Cordeiro, Merouane Debbah, Thierry Lestable, Narinderjit Singh Thandi

The field of Natural Language Processing (NLP) is currently undergoing a revolutionary transformation driven by the power of pre-trained Large Language Models (LLMs) based on groundbreaking Transformer architectures.

Language Modelling · Privacy Preserving

A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification

1 code implementation · 24 May 2023 · Yiannis Charalambous, Norbert Tihanyi, Ridhi Jain, Youcheng Sun, Mohamed Amine Ferrag, Lucas C. Cordeiro

In this paper, we present a novel solution that combines the capabilities of Large Language Models (LLMs) with formal verification strategies to verify and automatically repair software vulnerabilities (a minimal sketch of such a loop follows this entry).

C++ code
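
As a rough illustration of the verify-and-repair idea described in the abstract above, the sketch below alternates a bounded model checker with LLM-suggested patches. It assumes the ESBMC binary is available on PATH and an OpenAI-compatible client; the prompts, model name, and loop structure are illustrative and not the authors' actual pipeline.

```python
# Minimal sketch of a verify -> repair -> re-verify loop that pairs a bounded
# model checker with an LLM. Assumptions: the `esbmc` binary is on PATH, an
# OpenAI-compatible client is configured, and the prompt and model name are
# illustrative placeholders rather than the authors' actual pipeline.
import subprocess

from openai import OpenAI

client = OpenAI()


def verify(path: str) -> tuple[bool, str]:
    """Run the checker on a C file; return (verified_ok, checker_output)."""
    proc = subprocess.run(["esbmc", path], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr


def ask_llm_for_repair(source: str, report: str) -> str:
    """Ask the LLM for a fixed program, feeding back the checker's report."""
    prompt = (
        "The following C program fails formal verification.\n"
        f"Checker output:\n{report}\n\n"
        f"Program:\n{source}\n\n"
        "Return a corrected version of the complete program, code only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def self_heal(path: str, max_rounds: int = 3) -> bool:
    """Alternate verification and LLM-suggested repairs until the checker passes."""
    for _ in range(max_rounds):
        ok, report = verify(path)
        if ok:
            return True
        with open(path) as f:
            source = f.read()
        with open(path, "w") as f:
            f.write(ask_llm_for_repair(source, report))
    return verify(path)[0]
```

The design point worth noting is that the checker's output, including any counterexample, is fed back into the prompt, so the model is asked to repair a specific failed property rather than to rewrite the program blindly.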

Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study

no code implementations · 21 Mar 2023 · Mohamed Amine Ferrag, Burak Kantarci, Lucas C. Cordeiro, Merouane Debbah, Kim-Kwang Raymond Choo

However, we must also consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates or to corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments (a toy illustration follows this entry).

Federated Learning · Privacy Preserving
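
To make the two attack vectors mentioned above concrete, corrupted local data and corrupted model updates, here is a toy NumPy sketch of one federated-averaging round with a single malicious client. All names, sizes, and numbers are illustrative and unrelated to the paper's experimental setup.

```python
# Toy illustration of the two poisoning vectors named above, inside one
# bare-bones federated-averaging round: (1) label flipping on a client's
# local data, (2) inverting and amplifying a client's model update.
import numpy as np

rng = np.random.default_rng(0)
global_model = np.zeros(10)  # toy parameter vector shared by the server


def local_update(model, X, y, lr=0.1):
    """One least-squares gradient step as a stand-in for local training."""
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad


def flip_labels(y):
    """Data poisoning: the adversary flips binary labels before training."""
    return 1.0 - y


def poison_update(update, scale=-5.0):
    """Model poisoning: the adversary inverts and amplifies its update."""
    return scale * update


# One round with four honest clients and one malicious client.
updates = []
for client_id in range(5):
    X = rng.normal(size=(32, 10))
    y = (X @ np.ones(10) + rng.normal(size=32) > 0).astype(float)
    if client_id == 4:  # the malicious participant
        y = flip_labels(y)
        delta = local_update(global_model, X, y) - global_model
        updates.append(poison_update(delta))
    else:
        updates.append(local_update(global_model, X, y) - global_model)

global_model += np.mean(updates, axis=0)  # vanilla FedAvg aggregation
```

Robust aggregation rules (e.g., coordinate-wise median or trimmed-mean averaging) are the usual countermeasure to exactly this kind of manipulated update.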

A novel Two-Factor HoneyToken Authentication Mechanism

no code implementations · 16 Dec 2020 · Vassilis Papaspirou, Leandros Maglaras, Mohamed Amine Ferrag, Ioanna Kantzavelou, Helge Janicke

The majority of systems rely on password-based user authentication, but passwords have so many weaknesses, and are so widely used, that they easily raise significant security concerns, regardless of their encrypted form.

Cryptography and Security
