Search Results for author: Salah Ghamizi

Found 16 papers, 8 papers with code

RobustBlack: Challenging Black-Box Adversarial Attacks on State-of-the-Art Defenses

no code implementations • 30 Dec 2024 • Mohamed Djilani, Salah Ghamizi, Maxime Cordy

Although adversarial robustness has been extensively studied in white-box settings, recent advances in black-box attacks (including transfer- and query-based approaches) are primarily benchmarked against weak defenses, leaving a significant gap in the evaluation of their effectiveness against more recent and moderately robust models (e.g., those featured in the RobustBench leaderboard).

Adversarial Robustness

TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases

1 code implementation • 14 Aug 2024 • Thibault Simonetto, Salah Ghamizi, Maxime Cordy

In addition to our open benchmark (https://github.com/serval-uni-lu/tabularbench), where we welcome submissions of new models and defenses, we implement 7 robustification mechanisms inspired by state-of-the-art defenses in computer vision and propose the largest benchmark of robust tabular deep learning, with over 200 models across five critical scenarios in finance, healthcare, and security.

Adversarial Robustness Benchmarking +1

SafePowerGraph: Safety-aware Evaluation of Graph Neural Networks for Transmission Power Grids

1 code implementation • 17 Jul 2024 • Salah Ghamizi, Aleksandar Bojchevski, Aoxiang Ma, Jun Cao

At https://github.com/yamizi/SafePowerGraph we provide our open-source repository, a comprehensive leaderboard, a dataset, and a model zoo, and we expect our framework to standardize and advance research in the critical field of GNNs for power systems.

Graph Attention Self-Supervised Learning

Robustness Analysis of AI Models in Critical Energy Systems

no code implementations • 20 Jun 2024 • Pantelis Dogoulis, Matthieu Jimenez, Salah Ghamizi, Maxime Cordy, Yves Le Traon

This paper analyzes the robustness of state-of-the-art AI-based models for power grid operations under the $N-1$ security criterion.
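The $N-1$ security criterion requires that the grid remain operable after the loss of any single component. As a toy illustration of one part of that check (connectivity screening only, on a hypothetical 4-bus grid; this is not the paper's code), each line can be removed in turn and the remaining network tested for reachability:

```python
from collections import deque

def is_connected(nodes, edges):
    """BFS check that every bus is reachable after an outage."""
    if not nodes:
        return True
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {next(iter(nodes))}
    queue = deque(seen)
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen == set(nodes)

def n_minus_1_violations(nodes, edges):
    """Return the lines whose single outage splits the grid."""
    return [line for i, line in enumerate(edges)
            if not is_connected(nodes, edges[:i] + edges[i + 1:])]

# Hypothetical 4-bus grid: bus 3 hangs off bus 2 by a single radial line.
nodes = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(n_minus_1_violations(nodes, edges))  # → [(2, 3)]
```

A real $N-1$ assessment would additionally re-solve the power flow for each contingency and check thermal and voltage limits; connectivity is only the first, structural part of the criterion.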

Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data

1 code implementation • 2 Jun 2024 • Thibault Simonetto, Salah Ghamizi, Maxime Cordy

State-of-the-art deep learning models for tabular data have recently achieved performance acceptable for deployment in industrial settings.

Adversarial Attack Adversarial Robustness

PowerFlowMultiNet: Multigraph Neural Networks for Unbalanced Three-Phase Distribution Systems

no code implementations • 1 Mar 2024 • Salah Ghamizi, Jun Cao, Aoxiang Ma, Pedro Rodriguez

PowerFlowMultiNet outperforms traditional methods and other deep learning approaches in terms of accuracy and computational speed.

Graph Embedding

Hazards in Deep Learning Testing: Prevalence, Impact and Recommendations

no code implementations • 11 Sep 2023 • Salah Ghamizi, Maxime Cordy, Yuejun Guo, Mike Papadakis, Yves Le Traon

To this end, we survey the related literature and identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results.

Deep Learning

How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks

no code implementations • 24 May 2023 • Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy

Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks -- malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions.
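A word-based attack of the kind this paper studies can be illustrated with a deliberately naive toy (assumed for illustration, not the paper's attack): a keyword-count sentiment classifier is flipped by swapping its known negative words for out-of-vocabulary near-synonyms, which preserves the meaning for a human reader:

```python
# Toy keyword-count classifier standing in for a real NLP model.
POSITIVE = {"great", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "awful"}

def classify(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

# Hypothetical synonym table an attacker might use.
SYNONYMS = {"terrible": "dreadful", "bad": "subpar", "awful": "dire"}

def word_substitution_attack(text):
    """Replace known-negative words with synonyms the model has never seen."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

original = "the plot was terrible and the acting bad"
adversarial = word_substitution_attack(original)
print(classify(original))     # → negative
print(classify(adversarial))  # → positive, yet the meaning is unchanged
```

The paper's question is precisely whether such substitutions remain imperceptible and natural to human readers once the attack targets real models rather than a toy like this.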

Adversarial Text

GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks

1 code implementation • 6 Feb 2023 • Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon

While leveraging additional training data is a well-established way to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computation needed to train models.

Adversarial Robustness Data Augmentation +1

On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks

1 code implementation • 7 Feb 2022 • Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy

While the literature on security attacks and defense of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their implications on the robustness of real-world systems.

Adversarial Robustness Malware Detection +2

Adversarial Embedding: A robust and elusive Steganography and Watermarking technique

no code implementations • 14 Nov 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon

The key idea of our method is to use deep neural networks for image classification and adversarial attacks to embed secret information within images.
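The core idea can be sketched in miniature (a hypothetical toy, not the paper's implementation): the secret is encoded as a sequence of target classes, and each cover vector is perturbed until a fixed classifier predicts its assigned target, so the receiver recovers the message simply by classifying:

```python
# Fixed linear "classifier": 4 classes over 2-D inputs (stand-in for a DNN).
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]

def predict(x):
    scores = [w[0] * x[0] + w[1] * x[1] for w in W]
    return scores.index(max(scores))

def embed(cover, target, step=0.1, max_iters=100):
    """Adversarially nudge the cover toward the target class's direction."""
    x = list(cover)
    for _ in range(max_iters):
        if predict(x) == target:
            return x
        x[0] += step * W[target][0]
        x[1] += step * W[target][1]
    return x

secret = [2, 0, 3, 1]                  # 2 bits of payload per cover vector
covers = [[0.5, 0.2], [-0.3, 0.1], [0.2, 0.4], [0.6, -0.5]]
stego = [embed(c, t) for c, t in zip(covers, secret)]
recovered = [predict(x) for x in stego]
print(recovered)  # → [2, 0, 3, 1]: the receiver only needs the classifier
```

On real images the perturbations are crafted to stay imperceptible, which is what makes the channel elusive; this toy keeps only the encode-by-target-class / decode-by-prediction mechanism.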

Adversarial Attack Image Classification +2

Automated Search for Configurations of Deep Neural Network Architectures

1 code implementation • 9 Apr 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon

First, we model the variability of DNN architectures with a Feature Model (FM) that generalizes over existing architectures.
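A Feature Model of this kind can be pictured as a set of architectural choices plus cross-tree constraints, from which only valid configurations are enumerated. A minimal sketch with made-up features and constraints (not the paper's actual model):

```python
from itertools import product

# Hypothetical architecture features; a real Feature Model has many more.
FEATURES = {
    "depth": [2, 4, 8],
    "block": ["conv", "dense"],
    "pooling": ["max", "avg", "none"],
}

def is_valid(cfg):
    """Toy cross-tree constraints standing in for a real FM's rules."""
    if cfg["block"] == "dense" and cfg["pooling"] != "none":
        return False  # dense blocks take flat inputs, so no pooling
    if cfg["block"] == "conv" and cfg["pooling"] == "none":
        return False  # conv stacks in this sketch must downsample
    return True

keys = list(FEATURES)
configs = [dict(zip(keys, vals)) for vals in product(*FEATURES.values())]
valid = [c for c in configs if is_valid(c)]
print(len(configs), len(valid))  # → 18 9: constraints halve the space
```

The search procedure the paper describes would then explore only this constrained space of valid architectures instead of the full Cartesian product.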

Image Classification valid
